It was only a matter of time. 4 days, in fact, before I got the first spam account on my instance. Now, this is the most gentle, least annoying spam you could get, but it’s clearly there just to advertise a business. I don’t think this is a business setting up shop and using my pod in some legitimate way; it’s presumably a misguided attempt at getting a slightly higher search engine ranking.
I really don’t care about this one account. It won’t hurt anything. But if I get 10,000 or 20,000 of those, I will care a lot. I’m a bit concerned that someone has gone to the trouble at all: my instance is so new and so unheard-of that I assume this must be automated somehow. Who would do this by hand on an utterly unknown and unimportant instance like mine!?
So the real question is: how do podmins get a sense of who is signing up on their pod and whether they are really legitimate users or just taking up space?
As a podmin, it would be helpful to have a user administration area where I could, for example, see “accounts created in the last 24 hours, 7 days, 30 days” etc. It would be cool if I could spot accounts that have been idle for 7, 30, 60 days. I know there’s an auto-deletion mechanism, and it warns people and all that. But that mechanism largely assumes there’s a non-malicious human on the other end. Someone who created an account and forgot about it, lost their password, whatever. These accounts are “malicious”, or at least not cooperative. They don’t care. They’re posting a single page and presumably never coming back.
How do we spot those and potentially take action sooner than waiting 180 days or a year or whatever for the deletion timer to do its thing?
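The kind of report described above could be sketched in a few lines. This is not Diaspora*'s actual schema or API; the field names (`username`, `created_at`, `last_seen`) are hypothetical, and the function just illustrates the "recently created / long idle" partition a podmin dashboard might offer:

```python
from datetime import datetime, timedelta

def flag_accounts(accounts, now=None, new_within_days=7, idle_after_days=30):
    """Partition accounts into recently-created and long-idle lists.

    `accounts` is a list of dicts with hypothetical `username`,
    `created_at`, and `last_seen` datetime fields -- an illustration
    of the admin report described above, not Diaspora*'s real schema.
    """
    now = now or datetime.utcnow()
    recent = [a for a in accounts
              if now - a["created_at"] <= timedelta(days=new_within_days)]
    idle = [a for a in accounts
            if now - a["last_seen"] >= timedelta(days=idle_after_days)]
    return recent, idle
```

A real implementation would run this as a database query rather than in application code, but the thresholds (7/30/60 days) would work the same way.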
You can see created accounts in the admin section on a weekly basis. Also you can close accounts you don’t like (for whatever reason, it’s your pod, you make the rules) there.
Thanks. I hadn’t seen that part of the admin console. I totally get that I can “close accounts I don’t like”, the question is “how can I find accounts that I might not like?” And the important angle on this is metadata. It’s easy to search and find people who post objectionable content. It’s easy to spot users who never post at all.
It’s harder to spot users who post the odd public advertisement once in a while.
So the question is not “can I close accounts?” The question is “how do I curate my user population?”.
If you want to read everything your users write, you can create an account that shares with all of them, so you see everything they write (public posts only) in the stream. But when you have a lot of users and want to monitor all of them, that probably takes a lot of time.
You’re starting to see my point. If the goal is to have large numbers of people using Diaspora* pods for their everyday social media interactions, then we’re gonna need better moderation and administration tools than we have. What exists today is mostly first principles. I would struggle to manage any sizeable number of users with the tools the platform gives me today. I started this thread mainly to ask what people are currently doing. Sounds like the answer is “not much”, and the platform doesn’t really support much yet. That’s fine.
Interestingly, I’m up to 6 spam accounts on my pod now (10%). All following the same (mostly harmless) pattern. Looks like some digital ad farm has either assigned one of its minions to do this for its clients or has found a way to automate it. Hard to tell at the moment.
Wrote to D*HQ about this a while back; noticed a plethora of supposedly different users trolling, and also a troll pod!
I’m referring to your last paragraph…
Rely on your users. If some account starts spamming a lot, someone will notice and inform you.
@waithamai You’re answering the wrong question. I’m talking about manually or automatically creating what are fundamentally pointless accounts on the pod. They’re spam in the sense that they exist to have just one or two public posts that advertise a product or service or whatever. They are created and essentially immediately abandoned. They’re not targeting anyone or being malicious, but they’re consuming resources and misusing my pod. I’ve only dealt with 5 or 6 so far, but there’s little stopping someone from spamming my pod with them. The most important point is that no one is there. There’s no human on the other end of these accounts, or at most there’s some mindless drone in an advertising farm. Or even Amazon Mechanical Turk or similar.
Your approach to abuse/spam by active humans is also naive. You can’t rely on humans to notice and report spam unless there’s a really solid and easy-to-use spam/abuse reporting system built into the platform (I don’t think there is). Humans don’t scale. If we intend to take hundreds of thousands of users onto a single pod, the “people will let you know” approach doesn’t scale. If that’s not the goal or the right level of scale, then cool. Let’s be clear, then, that pods are meant to stay at a size a handful of humans can manually manage.
@grumpy-podmin Raises some really good concerns. In addition to reports and other tools to allow better management of these issues, I believe there needs to be a domain/IP blacklist feature. Ideally this will be something that can be aggregated and shared between pods.
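To make the blacklist idea concrete, here is a minimal sketch of what a signup-time check might look like. Diaspora* has no such built-in feature; the `Blocklist` class and its fields are invented for illustration, combining an email-domain denylist with IP/CIDR ranges of the kind that could be aggregated and shared between pods:

```python
import ipaddress

class Blocklist:
    """Hypothetical signup filter: blocks registrations whose email domain
    or source IP matches a shared denylist. Purely illustrative -- not an
    existing Diaspora* structure."""

    def __init__(self, domains=(), networks=()):
        self.domains = {d.lower() for d in domains}
        self.networks = [ipaddress.ip_network(n) for n in networks]

    def blocked(self, email, ip):
        # Check the email's domain first, then the source address.
        domain = email.rsplit("@", 1)[-1].lower()
        if domain in self.domains:
            return True
        addr = ipaddress.ip_address(ip)
        return any(addr in net for net in self.networks)
```

Sharing between pods could then be as simple as periodically fetching an agreed-upon plain-text list of domains and CIDR ranges.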
One related feature request would be an option to add a field to the registration form where a new user needs a code, set by the podmin, to create an account. For example, if you only want to allow signups on a particular pod for people you know, or within a particular organization, that would be a way to implement that limitation (“Join my Diaspora pod at d.foo.com! You’ll need to use the codeword ‘dolphin123’ to set up an account”). It doesn’t address all the issues here (the monitoring problem or the massive-pod situation), but it does make it easier to administer a pod created for a small group of people who do know each other, without worrying about spam accounts. This is a feature that would also encourage small-time pod maintainers. Like, I’m considering hosting a pod, but I don’t feel comfortable taking on the legal risk of hosting content for people I don’t know personally (or having to monitor for spam accounts).
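The check itself would be trivial. A minimal sketch, assuming a single podmin-set codeword (the function name and the example value from the post above are illustrative, not an existing Diaspora* setting):

```python
import hmac

# Codeword set by the podmin; example value taken from the post above.
REGISTRATION_CODE = "dolphin123"

def may_register(submitted_code):
    """Hypothetical signup gate: registration succeeds only when the code
    typed into the form matches the podmin's codeword. compare_digest is
    used so the comparison doesn't leak the code via timing differences."""
    return hmac.compare_digest(submitted_code, REGISTRATION_CODE)
```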
You can make your pod invitation-only, which is probably the best thing to use if you only want certain people to be able to join your pod. Your proposal of a password could easily go wrong if the password gets shared to people you don’t want to sign up – unless you use one-time-only passwords, in which case sending individual invitations (which is what is already possible) would be just as easy.
There is no requirement to open your pod to anyone – you can run it just for yourself as the only member if you want.
Bumping this topic since I think it is an important one.
I started a pod because I think D* is a good alternative to Facebook without FB’s data-grabbing misbehavior.
Also, IMO it is necessary to put up as few barriers as possible to let people use D*. Therefore my pod is open for registrations.
I do share @grumpy-podmin’s view that the options for moderation and control are currently (too) limited. Even if you, like me, have a pod that is free to use, in the end there must be an option to moderate in some way. I write use to emphasize that the pod should not be MISused. Misuse would be bad not only for the pod’s resources, but also for the pod’s reputation and, last but not least, for D*’s reputation.
This brings me to a question: why isn’t there a roadmap for future plans for D*? Not because I want to pin devs down on “promises” made in such a roadmap, but more to have an active community discussion about what features we, the community, would like to see implemented in D*.
Hello, I am a new user here. I’m not a podmin, just a regular user on diasp.eu. However, I want to second the importance of this thread.
- For one, my personal account’s newer posts are mostly being liked by seemingly nonsense accounts: users that never comment and seem inactive themselves.
They look a bit like part of a planned initiative to set up a lot of bogus/bot accounts (or, guessing again, maybe to reactivate older or deleted accounts), but for sure they never comment, nor do they have any public postings of their own to look at.
- On another point, I follow the diaspora/diaspora GitHub discussion on these matters, and from there I get the impression that the podmins involved are having arguments about this, even banning each other from this or that service. It all seems more divisive than uniting, and somehow echoes the netiquette debates that arose from flame wars on members-only mailing lists back in the mid ’90s.
So to cut this short:
- what can be done to fend off the bogus accounts today,
- are there threads discussing the moral issues and technical improvements in this regard,
- and lastly, which pods do you think are at the forefront of favoring friendliness between users instead of just a simple allow-all policy?
Thank you for any insights.
See this post for more about this.
You can ignore users and then they can’t interact with you and your posts anymore.
This doesn’t look too healthy. Are those accounts still on joindiaspora.org or is it an old database that is reused (abused) on a new pod? Now THAT would be a security breach.
Ignoring users would be, as we call it in the Netherlands, ostrich policy (putting your head in the sand and pretending the problem does not exist).
joindiaspora is not run by the project team; it is @zauberstuhl’s responsibility to look into that. He is aware, but so far there hasn’t been a meaningful reaction. That’s all we know and, quite frankly, all we can discuss here.
I am fighting this. As soon as I have some meaningful results, I will make a post!
Thank you @zauberstuhl
I am curious about the outcome.
Do you need a list of example accounts for the fake likes? I don’t want to compromise anyone who actually exists, of course, but I can supply examples.
Benjamin, thanks for the suggestions.
I’d be happier if I didn’t have to actively ignore fake accounts.
I can see that it will be hard to establish reasonable detection schemes for those, though. One possible approach I’ve seen on other forums: require new users to make a minimum number of public posts before their account is no longer subject to auto-deletion.
Older and inactive accounts could be auto-deleted after a year or so, especially when they never posted anything anyway.
Then again, some people might want to use their accounts only for private conversations; in that case it’s the same thing, really: if an account gets used, it drops out of the auto-deletion scheme; if not, the user gets a warning email about the coming auto-deletion, with a reasonable reaction time before the deletion happens.
Generally, fake accounts never seem to comment, either.
Just a few ideas.
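The heuristic sketched in the last few posts could look something like this. Everything here is an assumption for illustration: the field names are made up, and the thresholds (one public post, a year of inactivity) come from the discussion above, not from any existing Diaspora* feature:

```python
from datetime import datetime, timedelta

def deletion_candidates(accounts, now=None, min_posts=1, idle_days=365):
    """Sketch of the proposed scheme: an account escapes auto-deletion once
    it shows real activity (public posts, comments, or a private
    conversation); otherwise, after a year idle it is queued for a warning
    email and later deletion. Field names are hypothetical."""
    now = now or datetime.utcnow()
    candidates = []
    for a in accounts:
        active = (a["public_posts"] >= min_posts
                  or a["comments"] > 0
                  or a["private_conversations"] > 0)
        idle = now - a["last_seen"] >= timedelta(days=idle_days)
        if not active and idle:
            candidates.append(a["username"])
    return candidates
```

Counting private conversations as activity addresses the concern above that some people legitimately use their accounts only for private messaging.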