
RE: A solution to the downvoting/flagging problems on Steemit

in #community · 7 years ago

I think this idea has some potential. I really only see a hint of one flaw at the moment, and you may have covered it and I simply missed it.

What is to stop a person from creating multiple accounts, even a dozen, doing some simple posting with them for a week, and then having an army of accounts they can use to counterflag/flag?

Yes, this would be another form of Sybil attack. Yet this is also why the "whales not being able to vote" experiment was a short-term solution at best. If you draw a line in the sand, the exploiters will usually find a way to spread themselves across more accounts rather than one, and with their stake spread out they effectively have the same power as before the "whales cannot vote" rule. So what I am seeing here is that they could likely find ways to game this system as well.

The reputation system didn't initially exist. We had a big problem with spam in late July/early August. People tried to combat it, but the spam bot makers created an army of small bot accounts that all upvoted each other's posts. So both the people writing bots to counter the spam and the people countering it by hand were stopped by that Sybil attack.

They instituted the reputation system ONLY as a means to deal with that. A person with a higher reputation downvoting something would reduce the reputation of the target, and someone with a lower reputation could not counter it. Upvotes could similarly increase reputation, though that tended to only work for upvotes from people of higher reputation. This made it possible to stop the spam in its tracks.
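Roughly, the rule worked like this (a minimal sketch in Python; the function name and weights are my own illustration, not Steem's actual reputation formula):

```python
# Minimal sketch of the rule described above. The weights here are
# illustrative assumptions, not Steem's actual reputation math.

def apply_vote(target_rep: float, voter_rep: float,
               is_downvote: bool, weight: float = 1.0) -> float:
    """Return the target's new reputation after one vote."""
    if is_downvote:
        # Only a voter with equal or higher reputation can drag the target down;
        # a lower reputation cannot counter or reduce it.
        if voter_rep >= target_rep:
            return target_rep - weight
        return target_rep
    # Upvotes mostly matter when they come from higher-reputation accounts.
    if voter_rep >= target_rep:
        return target_rep + weight
    return target_rep + weight * 0.1  # small effect from lower-reputation upvotes
```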

It was very bad for a moment before that. You could see a new post every couple of minutes and they were almost always one of these bots. You had to wade through that mess to try to find legit posts.

So it worked well for that.

It also made it so people can have a high reputation, earned from other people's votes, without having high Steem Power. So they can theoretically counter a high Steem Power account hiding someone's post. This is really the only place in the entire platform where someone can impact such a person without having massive power themselves.

I was only providing that for historical information in case it might help you in some way.


The suggestion I made was to tie the flagging system to a high reputation, around 55-60. If they make a bunch of accounts and upvote their own posts for a week, that won't give them that high a reputation, but let's say it does. Once they engage in illegitimate flagging they would have to fight the whole community, because if they flag things and get counterflagged their reputation starts dropping, and with each successful counterflag they lose more reputation than with the last one; I think exponential growth is the way to go. It should also be tied into categories, so that people have a way to legitimize their flags, and because the penalty only compounds within a category, someone engaging in spam won't suffer an exponentially more damaging flag if they later write a comment or post seen as abusive and flagged as such. That gives them a chance to change their ways rather than simply being run off.
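Something like this, in rough Python (the threshold, base penalty, and growth factor are placeholders I picked for the sketch, not fixed values):

```python
from collections import defaultdict

# Placeholder numbers: a 55-60 reputation gate and a doubling penalty.
MIN_FLAG_REP = 55
BASE_PENALTY = 1.0
GROWTH_FACTOR = 2.0

# Per-account, per-category count of successful counterflags against them.
counterflags = defaultdict(lambda: defaultdict(int))

def can_flag(reputation: float) -> bool:
    """Flagging is only unlocked above the reputation threshold."""
    return reputation >= MIN_FLAG_REP

def apply_counterflag(account: str, category: str, reputation: float) -> float:
    """Apply one successful counterflag and return the account's new reputation.

    Each successful counterflag in the same category costs exponentially more
    than the last, but the count does not carry over to other categories, so a
    spammer is not hit with a compounded penalty for an unrelated first offence.
    """
    penalty = BASE_PENALTY * GROWTH_FACTOR ** counterflags[account][category]
    counterflags[account][category] += 1
    return reputation - penalty
```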

The problem is still spam. Spam under the current system is only dealt with on the UI side; there is no way to stop someone with even one account from posting massive amounts of content and loading up the system. Theoretically this could happen in a matter of hours, gigabytes upon gigabytes. To counter that we need a threshold that people have to stay above or sink into oblivion, and by oblivion I mean having no way to create content and post it to the blockchain, any kind of content, even memos to wallets. So when someone gets nuked they have been neutralized, and the only way back would be to try to revive their accounts through proxies.
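In other words, a hard gate at the broadcast level, something like this (the floor value and operation names are assumptions for illustration only):

```python
# Hypothetical floor: accounts at or below it cannot broadcast anything at all.
POSTING_REP_FLOOR = 0

# "Any kind of content, even memos to wallets."
CONTENT_OPERATIONS = {"post", "comment", "transfer_with_memo"}

def may_broadcast(reputation: float, operation: str) -> bool:
    """Reject every content-bearing operation from a nuked account."""
    if operation in CONTENT_OPERATIONS and reputation <= POSTING_REP_FLOOR:
        return False
    return True
```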

The same goes for the curation system; we cannot currently stop sockpuppet accounts from hiding content, demoralizing people, or attacking people's payouts. If we set a threshold of 30-40 reputation to unlock the ability to upvote and downvote content, that would effectively stop these accounts, as they would need to create content and get voted up to unlock it. And just as with people trying to Sybil attack through flagging, let's say they try to attack through counterflagging to damage legitimate flaggers' reputations: they could be policed by a small group and their accounts brought back below the threshold to flag/counterflag.
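The curation gate would be the same kind of check (the exact cutoff is a placeholder from the 30-40 range above):

```python
VOTING_REP_THRESHOLD = 30  # placeholder from the 30-40 range

def may_vote(reputation: float) -> bool:
    """Fresh sockpuppets start below the threshold; they cannot upvote or
    downvote until they have created content and been voted up past it."""
    return reputation >= VOTING_REP_THRESHOLD
```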

So back to the curation system: if they were to artificially boost their accounts over that threshold (obviously fairly easy and not a major obstacle) and engaged in malicious downvoting, even with downvotes draining 10x more voting power, people would go to their content and flag it. For them to be even marginally successful they would need to wait 30 days and get past the time limitation on flagging something, but that could be eliminated outright: the time limit would start not at the creation of the content but at the first effective flag, and they would have 30 days to petition the community to reverse that action.
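So the window would look something like this (the helper names and the fixed 30-day constant are just my illustration of the idea):

```python
from datetime import datetime, timedelta

PETITION_WINDOW = timedelta(days=30)

def petition_deadline(first_effective_flag_at: datetime) -> datetime:
    """The clock starts at the first effective flag, not at the post's creation;
    the author then has 30 days to petition the community to reverse it."""
    return first_effective_flag_at + PETITION_WINDOW

def petition_open(first_effective_flag_at: datetime, now: datetime) -> bool:
    return now <= petition_deadline(first_effective_flag_at)
```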

I will write an update and include these changes soon; I'm awfully busy at the moment, but thank you for the feedback.

