A Strong Recommendation For Steemit's Next Hardfork: Anti-Spam Protocol...

in steemit •  10 months ago

Many will agree, Steemit is pretty cool.

However, as it continues growing, we face challenges that must be met in order to maintain the integrity of the site and community, and to ensure standards of conduct are set that keep it from degrading into a shithole as new people arrive with their ignorance and bad habits from other social media platforms.

As has been in discussion lately, which I also wrote about a couple days ago:

Setting Standards & Examples On Steemit: Hitting Heavy With The Flag Hammer On Inappropriate Behavior...

Spam in comments has become an issue here.

Whether it's amateurs copy-and-pasting generic messages irrelevant to the stories at hand, or bots outright spamming from multiple accounts all day, every day, there's been an increasing amount of garbage content that adds negative value to the site by diminishing the quality of the material here.

It's impersonal. It's disingenuous. It's disrespectful. And it's inappropriate.



So, the question arises: how shall we deal with this?

Flagging down individual spam comments might be the current option - though it is time-consuming, and requires a user to have a significant amount of Steem Power for the action to be effective in lowering a user's reputation score and thus sending a powerful message.

However, out of that last post came a simple idea:

Code in an anti-spam algorithm in the next hard fork.

Now, I'm not too technically-inclined, so I can't provide exact details of what this would look like. Though, here's an overview of the concept:

The "IF" condition is set: i.e. if an account is found to have left three (3) comments that are 50%+ the same content within an hour, or five (5) such comments within a 48-hour period...

"THEN:"

  • a warning notice is sent as a reply to the comments letting the user know they've hit a spam filter
  • all rewards are removed and disabled from the comments
  • any more such comments that hit the filter(s) in the future will get flagged with a voting weight that would drop the user's reputation score by x amount
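For what it's worth, the conditions above could be sketched roughly as follows. This is only an illustration of the concept, not actual Steem code: the thresholds (50% similarity, 3 similar comments within an hour) come from the post, while the data structures and function names here are hypothetical.

```python
# Hypothetical sketch of the "IF" condition described above.
from difflib import SequenceMatcher
from collections import deque
import time

SIMILARITY_THRESHOLD = 0.50   # "50%+ the same content"
MAX_SIMILAR_PER_HOUR = 3      # "(3) comments ... within an hour"
WINDOW_SECONDS = 3600

# recent comments per account: account -> deque of (timestamp, text)
recent_comments = {}

def similarity(a, b):
    """Rough ratio of matching content between two comments (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def is_spam(account, text, now=None):
    """Return True when this comment trips the repeated-comment filter."""
    now = now if now is not None else time.time()
    history = recent_comments.setdefault(account, deque())
    # drop comments that fall outside the one-hour window
    while history and now - history[0][0] > WINDOW_SECONDS:
        history.popleft()
    similar = sum(1 for _, old in history
                  if similarity(old, text) >= SIMILARITY_THRESHOLD)
    history.append((now, text))
    # counting the new comment itself, three near-duplicates trip the filter
    return similar + 1 >= MAX_SIMILAR_PER_HOUR
```

A "THEN" step (warning reply, reward removal, flag) would then hang off a `True` result. The 48-hour condition would work the same way with a longer window and a higher count.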



This is a basic overview.

Of course, more detailed conditions could be put into place to specify what is and isn't considered spam.

To my knowledge, there are bot accounts active which scan posts for duplicate/plagiarised content. Though I don't know how all the coding behind Steemit works, I assume something could be worked out to link a bot like this into the site protocol and a master Steem account that could use its voting power to flag offenders whose posts get detected by the bot/algorithm and hit the spam filters.



Now, I'm not sure exactly where the discussions on what goes into upcoming hard forks take place, who makes those decisions and how they get made, etc.

As such, I'd also like to request that if anyone reading this knows, to comment below on how it'd be best to submit this idea in the appropriate place.

While I'd hope this alone would receive the support and resteems to get it to the attention of the people in a position to implement such a feature, this may also be a great opportunity to generate greater awareness within the community of how such matters are handled - and, if others have similar or different ideas as to how the site's protocols could be upgraded, where and how they may present their ideas.



And that's that.

Short and sweet.

Hopefully this or something like it can get put into action, so Steem can continue growing, with higher standards of conduct set and auto-enforced to ensure a consistent stream of high-quality content without annoying, space-filling spam in the comments section.

p3aCe.


Hmm... as much of an edge case as this may be, is it a bad precedent to introduce hard-coded censorship into the blockchain?

After all - banning users from repeatedly posting the same comment, as bizarre as it sounds, is censorship. And I'd argue that all forms of censorship must go through a community enforcement model, not hard-code.

In my mind, we would handle this the same way we identify and eliminate plagiarized works - via flagging, not through lines of code.

Edit: to take it one step further, I would support adding a feature to a Steem interface that auto-hides spam comments - for example, if busy.org were to not display any comments that meet the conditions you outline in your original post - because, again, it resolves the problem without resorting to hard-coded censorship.

·

Of course, this is an ideological debate. And there is no absolute answer, given there are bound to be different opinions on this - just as there were on the issue of the hard fork that split Ethereum into two chains.

I'd say fuck the ideology of absolute non-censorship. Focus on the broader context...

Spam is SPAM. If the community agrees that spam is inappropriate, fuck the particularity of "hard-coded censorship" - the community has agreed that spam is inappropriate, and uses the tools at hand to enforce that standard which shall ensure the integrity of the platform.

It's not a matter of "censorship" - it's one of defining clear rules of conduct for this shared digital space and establishing the conditions to enforce those rules. If people don't like the rules and want a space that fits their ideological standard of complete anti-censorship so they can spam with disregard for respect of the community's other members, then they can go elsewhere.

Re: flagging... good in theory. not so much in practice.

Most people won't flag.

Many don't know the function is even there. More still don't have a fully clear understanding of when it is and is not appropriate to use. And many may be tempted to use it, feeling it'd be appropriate, but are too fearful of doing so - either feeling they'd be a "bad" person for penalizing somebody according to their own subjectivity, or not wanting to piss anybody off by flagging them and perhaps reaping the consequences of making enemies that could do them greater damage in the future.

From my viewpoint, a clear agreement on what constitutes spam and setting measures in place to automate the enforcement of such a defined cultural code is the only way this issue may effectively be addressed. I could be wrong, though I'm focused on what'd be effective and best serve the community first before caring about being right...

·
·

We agree on a few clear points:

(1) spam is bad.
(2) whenever rewards are allocated to spam comments, the Steem network gets weaker.
(3) we should use the best tools available to ensure that (2) does not happen.

It's not a question of whether or not spam is good. It's a question of what the right tools are for the job.

What percent of the community (or percent of stake) should be required to agree before we hard code censorship into the protocol? I'm trying to understand how this process is different from one that could lead to censorship based on, for example, political ideas.

While it may seem strange to compare banning spam to banning ideologies - we've seen a similar path towards censorship in history regarding centralized networks (i.e. Government) - can we be sure the same thing won't happen on a decentralized network?

·
·
·

hmm... there are definitely a lot of important issues related to governance coming up here. and undoubtedly, even if the whole premise is to be working towards complete decentralization, there's gotta be some form of governance...

extending beyond this one issue of spam, this does raise some good questions along the lines of "how can consensus be reached on key decisions that need to be made for the protocol to upgrade and community to advance?"

truthfully, I don't know how the decisions have been made to date. there have been multiple hard forks - though what has been the process for determining what gets coded into those updates?

I'd guess that most people on the site have no clue either.

So, who is making those decisions? And if more users were to know how the process occurs, would they be freely open to step in and participate?

this might seem like a bit of a diversion, though I feel, going through this, that there is a point emerging I didn't expect: there is no ideal of decentralization that can ever be fulfilled to the full extent of the ideological epitome. somewhere, someone is making decisions to drive forward the protocol. that's NOT decentralized. perhaps to some degree it is, as it's kind of democratic if anyone can step in and join the process. but the majority of people won't. so there will always be some degree of centralization of decision-making as a byproduct of the way it has to be...

of the thousands of users on here, I'm guessing only a few dozen really know how they could get active in such decision-making processes. and even if ALL were to know, probably only a fraction would get involved.

nonetheless, decisions have to be made. perhaps the more people that get involved, the closer to "decentralization" we'd move.

(and maybe that'd be part of the case for futarchy - as little as I know of it, it seems there is some part of it relating to voters being invested in an outcome. or perhaps, there could be some sort of incentivization offered for users to participate in a voting process to help with the decision-making processes, thus moving closer towards that decentralization ideal.)

man, this feels like one of the most scattered comments I've ever written here. lol. as proving - not a simple, straightforward matter...

·
·
·
·

Yeah totally, it is important that as many users as possible understand how these decisions are made. To my understanding, it is the top witnesses who ultimately control the power to enact hard forks. I believe they are the ones who have the discussions, sometimes "behind closed doors" to some extent, that affect us all.

Perhaps some more research is in order, so both of us can figure out a bit better what the hell is going on here?!

Cheers dude, it was good to discuss this with you.

·
·

Is a bot classed as a person?

·
·

CommentWealth tries to flag spam for minnows who might not feel comfortable or confident flagging it themselves. But since the account is human and not a bot, obviously it only flags a minor fraction. But we're trying!

·

I think you are making a good point here, that stuff belongs at the interface not the network level and even then it's a tricky problem to solve. We can probably all agree that spam is bad. But can we all agree what is spam? I would bet that we cannot, that people have widely different opinions on what is spam and what isn't because it's not as simple as it sounds. If detecting spam was simple, you wouldn't be told to "check your spam folder" so often, despite the fact that many companies have worked for decades on solving the problem of email spam.

·
·

Yea, this is a good point - even spam is hard to define.

Hi Rok,
It's great seeing all the tips, comments and ideas on Steemit.
But to censor is to limit growth.
True freedom is choice. Don't like it? Don't listen to what is posted, don't vote. That is how I choose to live.
Karma's a bitch... for the ones who attacked me... I won't fall into that trap.
Peace out little bro!

That makes sense even to catch a human saying "great post" 15 times in a row manually! I work to try to make each of my comments unique and I think we all benefit from doing the same. Of course, I think the whole world would benefit if everyone thought like me which might be wrong ...

·

Amen.

And even if leaving a short comment, best put some original energy into it that distinguishes it from an absolute basic "nice/great post."

Nonetheless, it is what it is. Best lead by example...

Brazen Spamming


Great idea!! Let's see it happen

Agree! I like the idea.

I totally agree Rok.
The spammers are getting more brazen daily and it's frustrating when it ruins a great comment thread.
It'll be interesting to see how many spammers hit this post.
Here's hoping something along the lines of what you have outlined here can be achieved, if not the reputation of the platform will eventually be ruined. Fingers crossed.

spam has always been a problem on every interactive site. it would be nice to see it being tackled on steemit. but for now i don't see it as an issue

I agree! Low SP users can only do so much. Surely with a little code magik, we can solve the problem with these serial-spammers!

I thought the @cheetah app did that? I've seen several times where it's posted that someone is a known plagiarizer..

·

However, it is also dumb. If you actually own the content you are posting and have previously posted it elsewhere, it can eventually lead to you being put on a list, and it will start flagging your posts. Since there isn't really a person behind it, it can be difficult for people to get off that list, even after proving multiple times and in multiple ways that they own and produce the content.

·
·

Ahhh... I did not know that

·
·

tricky, yeah.

undoubtedly, there is some real value in the bot - able to catch those plagiarizing. yet, with its drawbacks, as you address.

perhaps an update could include verification of a user's owning accounts on the different social media sites they might be posting 'duplicate' content from, so as to preemptively exclude such cases from triggering the bots' alarms. yet even then, what'd stop the user from plagiarizing on those other sites...

the yin & yang...

·
·
·

I use Keybase to cryptographically prove my identity across sites. However, a bot would not need to be so complex - it could just issue a random string or passphrase the person would need to post on the other site. There could still be problems - sometimes a person's content, be it words, photos, audio or video, is already plagiarized, but the bot doesn't know which site is the original one that an author should claim.

Mostly I think that bots should always have a complaint mechanism to their human controller. If they can't respond to requests for review and redress of wrongful actions, a bot should be banned from the system IMO.
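The challenge-string idea described above could look something like this in rough outline. To be clear, this is a hypothetical sketch, not how cheetah or Keybase actually work: the bot issues a random token, the user posts it on the external site, and the bot later checks the fetched page text for it. The function names and storage are invented for illustration.

```python
# Hypothetical sketch of cross-site ownership verification via a random token.
import secrets

# pending challenges: (steem_account, external_url) -> issued token
pending_challenges = {}

def issue_challenge(steem_account, external_url):
    """Generate a one-time token the user must post on the other site."""
    token = secrets.token_hex(16)
    pending_challenges[(steem_account, external_url)] = token
    return token

def verify_challenge(steem_account, external_url, page_text):
    """Confirm ownership if the issued token appears in the fetched page."""
    token = pending_challenges.get((steem_account, external_url))
    return token is not None and token in page_text
```

In practice the bot would fetch `page_text` over HTTP itself; and as noted above, this only proves control of the other account, not original authorship.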

·
·
·
·

That's not a bad idea...

·

such is one example of where a bot can/might be useful.

and even again, as @o1o1o1o addresses - even that has its potential downside of counterproductively targeting users who might be posting duplicates of their own comments.

such, here we go into the dualistic nature of the universe - no clear black-or-white answers all the time...

I like how you're thinking, but it wouldn't work. The spam bots would just be modified to post a random entry from a list of 10,000 greetings and salutations.
I've previously suggested that authors be given a slider bar to set the minimum reputation required to comment on the post.
Ned might set his at 70, preventing a 65 like me from commenting; while a minnow would likely set theirs at 10, to try and generate as much traffic on his post as possible.
I believe this would be much fairer than the current universal 25.
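The slider suggestion above is simple enough to sketch. Again, this is just an illustration under the commenter's assumptions - the per-post minimum and the default of 25 come from the comment, while the function and field names are hypothetical:

```python
# Hypothetical sketch of a per-post minimum-reputation slider.

DEFAULT_MIN_REP = 25  # the current universal threshold mentioned above

def can_comment(commenter_rep, post_min_rep=DEFAULT_MIN_REP):
    """True if the commenter's reputation meets the author's chosen bar."""
    return commenter_rep >= post_min_rep
```

So a post with its slider at 70 would block a 65-rep user, while a minnow's post set at 10 stays open to nearly everyone.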

·

good point.

not a bad idea. though at the same time, even that would come with the drawback of restricting the possibility of potentially great conversations initiated with comments from newer users.

please clarify: does that mean that currently, someone with less than a 25 reputation cannot comment on others' stories? or what do you mean by the "universal 25"?

·
·

That's right. Sometimes you see a message, This comment has been hidden due to low ratings.
That's what's happening. Either the user has a low rep or the specific comment has been downvoted.

·
·
·

Ah. Thanks for the clarification. I knew posts get hidden like that due to a flag, though not because the rep score was below 25.

·
·
·
·

I may be wrong, but that's what I've deduced.
It means that those under 25, like @skeptic have to post content to get back up to 25.
Their comments are all hidden until then.

·
·
·
·
·

one look at that guy's profile name - "Cynical Asshole" - left little wonder as to why he might be getting flagged and has such a low rep... lol

·
·
·
·

I may be wrong there. I've just checked his comments tab and none appear to be hidden by default.

What about some kind of reward for cleaning up spam? I believe lots of bots would flag the hell out of spammers if they could get 0.001 SBD for each bullshit comment removed

·

On second thought that wasn't a good idea... Guess some would create lots of spam so they could earn a lot from that. Let's go for the anti-spam algorithm!

·
·

Bullet dodged

·
·

Haha. Good catch. :-)

·

Bots are spam - 99% of them, anyway, from what I have seen. Do I really need to constantly see ionlysaymeep saying meep on thousands of posts? Do I need to wade through the tens of thousands of steemit badge crap posts that are longer than most posts in the comments, just to find the poor sod's comment that got stuck between two of them? Do I need to see that idiotic kiss-blowing bloke blow one more up someone's ass? Get rid of the useless minnow, whale, meep, kiss bullshit spam bots and 90% of your spam problem will go away. A bot is not entitled to free speech, in my opinion. If the stupid bot owner has something to say, then he/she/it can damn well say it on their own without using a SPAM BOT. KILL THE SPAM BOTS, KILL THE SPAM.

·
·

Couldn't agree more. Ban bots and 90% of the spam will go away.

·
·

that might address half the problem. though even without the bots, there are still a lot of users copy-and-pasting.

and the bots... difficult to address. I dunno if there even is a way to ban them. and if there were, where are the lines drawn? outright SPAM bots, sounds fair. though as someone else here suggested, it's probable that they'd just get programmed to adapt and bypass the filters.

tough questions and answers to implement.

So true bro!!!

All good ideas have a beginning.