You are viewing a single comment's thread from:

RE: Hey Steemit. Let's Talk About Flagging. Again.

in #flagging7 years ago

Tell me what you think about my suggestion of trying a new flagging system alongside a stunted downvote: increase the voting power required for a downvote to ten times that of an equal upvote, and reduce the power a downvote draws from SP to 66% or less.
Here is the post.
https://steemit.com/community/@baah/a-solution-to-the-downvoting-flagging-problems-on-steemit

Also, I should edit it to reflect a more recent adaptation: tie curation to a reputation above 30-40, just as flagging would be, to stop sockpuppet accounts and confine bad actors to simply creating content until they can climb back up in reputation.
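The rules proposed above can be sketched in a few lines. This is only an illustration of the numbers in the proposal, not Steem code; the constant names and function signatures are hypothetical, and the 30 threshold is the low end of the suggested 30-40 range.

```python
# Hypothetical sketch of the proposed voting rules: a downvote drains
# 10x the voting power of an equal upvote, its influence is capped at
# 66% of the SP weight, and curation/flagging require a minimum
# reputation (30-40 in the proposal; 30 assumed here).

REP_THRESHOLD = 30        # assumed minimum reputation to curate or flag
DOWNVOTE_COST_MULT = 10   # a downvote costs 10x the power of an upvote
DOWNVOTE_EFFECT = 0.66    # a downvote carries at most 66% of the SP weight

def vote_cost(base_power_cost: float, is_downvote: bool) -> float:
    """Voting power drained from the voter's account for one vote."""
    return base_power_cost * (DOWNVOTE_COST_MULT if is_downvote else 1)

def vote_effect(sp_weight: float, is_downvote: bool) -> float:
    """Influence the vote actually carries against rewards."""
    return sp_weight * (DOWNVOTE_EFFECT if is_downvote else 1.0)

def may_vote(reputation: int) -> bool:
    """Curation and flagging are only enabled above the threshold."""
    return reputation >= REP_THRESHOLD
```

Under these assumptions an account that autoflags drains its power ten times faster than one that upvotes, which is the incentive shift the proposal is after.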


Interesting. Thanks for sharing your views. I think anything done without respect to VESTs creates a Sybil risk and puts a lot of trust in a reputation system, which could then be gamed without any cost. Example: someone creates 100 bot accounts that all vote each other up and look like real people. They all gain reputation over time and eventually have enough to cause havoc on the whole system. This is a complicated problem, to be sure. I'm not yet convinced that what we have now isn't the best solution yet possible.

From what I've seen, usually when people get flagged to oblivion, they've done something pretty stupid or reacted to something with high negative emotion. I might be wrong there, but that's what I've seen repeatedly.

It couldn't work, for two reasons:
First, they would need to create a bunch of content and have the rest of the community vote them up to gain reputation. They couldn't vote themselves up, as curation would only be enabled at 30-40 reputation.

The second reason is that they couldn't gang up on anybody. If, for example, they hit one piece of content with 100 flags, a single counterflag invalidates those flags, and a further counterflag damages the first flagger's reputation. To be even marginally successful, they would need to spend a lot of time and effort planning each individual attack and creating bots to flag as new counterflags accumulate against theirs.

Eventually only 100 people would be needed to counter their efforts, reveal the content, and leave the author's reputation untouched; and if the attackers flag again as each new counterflag appears, 200 counterflags would punish every flagger's reputation. They couldn't hold back and flag only where no counterflags are present, because that would lock in their flags and count them merely as gestural flags. On top of that, 100 flags versus 99 counterflags would affect the author's reputation as a single flag, and 1,000 flags would still count as one, because it is one offense per piece of content, not 1,000. The author would also have 30 days to petition enough people to nullify the flags, so a malicious campaign could be stalled effectively.
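The flag/counterflag arithmetic above can be condensed into a small sketch. Everything here is hypothetical and derived only from this comment's description: flags on one piece of content collapse to at most one offense, and whichever side ends up outnumbered takes the reputation hit.

```python
# Hypothetical sketch of the counterflag resolution described above.
# Assumptions (from the proposal, not from any Steem implementation):
# - N flags on one piece of content count as at most ONE offense.
# - If counterflags meet or exceed flags, the flags are nullified and
#   the flaggers take the reputation penalty instead of the author.

def resolve(flags: int, counterflags: int) -> dict:
    """Return who is penalized and how many offenses the author incurs."""
    if flags <= 0:
        return {"author_offenses": 0, "penalized": None}
    if counterflags >= flags:
        # Flags invalidated; the flaggers' reputations are damaged.
        return {"author_offenses": 0, "penalized": "flaggers"}
    # e.g. 100 flags vs 99 counterflags still counts as a single offense.
    return {"author_offenses": 1, "penalized": "author"}
```

The key design point the comment argues for is the cap: the author's downside is bounded at one offense per piece of content no matter how large the brigade, so an attack's cost scales while its payoff does not.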

In the other scenario, where they go on a rampage and counterflag legitimate flags to damage those flaggers' reputations, they would become the targets of a movement that would decimate their reputation in no time and restrict their accounts to curation and content creation only.

Of course, those situations depend on them building a high enough reputation by creating content rather than by spamming, trolling, or plagiarizing, since that would be easy and rewarding to counter with flags, as policing the community should be; it should not be a tax on voting power.

Tell me what you think. I believe VESTs should be curtailed only for upvoting, and that is not to say we should keep the current voting curve; a flatter, more balanced curve is needed, and that was the main problem with HF16-17. Downvotes should cost 10x the voting power, as you mentioned, because they are negative actions overall and should be modeled after the real world, where negativity bias is evident, not on a binary Vulcan mindset. People would be incentivized to upvote more, and bad actors would be drained 10x faster if they autoflag people.

This would create a whole clique within the community happy to address bad actors through the new system, much like the chat channel for abuse: they would be given reports of content flagged without justifiable reason (disagreement over rewards would be handled by the downvoting system), and people could foster an environment where reputation serves as the metric of good standing in the community. The great thing about this suggested system is that people can reverse others' flags, or invalidate successive counterflags on valid flags, with effectiveness and balance, not by "money makes right."

Reputation is not consensus and cannot be used to limit operations on the blockchain.
As we can see with noganoo, it is abusable; the current system is too flawed to use for anything but hiding negative-value posts, really.

What does that mean? Consensus is not reputation, yes: consensus is agreement, reputation is a system. But to say it cannot be used to limit operations on the blockchain is ridiculous, and I want to know WHY and HOW that is. Why can we not have a system of reputation that LIMITS operations on the blockchain? Is reputation not a metric tied to individual accounts and used to determine numerous factors, and therefore usable to determine other factors as well?

"Consensus" on a blockchain means that every node in the network can verify it via code. This is not the case with the reputation, so it cannot be used to limit operations on the blockchain.
And it shouldn't be, because it's flawed. Not everyone with a 0 reputation is a spammer, not everyone with a high one is a saint.

Bump. Maybe you didn't see my comment/reply to this, but I am waiting to hear back.

I can't really answer that ;-) I assume it would be too costly for the witness nodes to check each poster's reputation, but for an in-depth explanation I'm the wrong guy.
What I do know is that its rules are quite arbitrary, and we have already had multiple cases of rep abuse. That's why I'm opposed to using that metric for anything but GUI filtering. Maybe someone will come up with a better metric in the future, but I wouldn't bet on it.

I am not trying to be thick, but that reason is at best a guess, and little if any understanding can be derived from it. Do you have anything substantial to back it up, so we can explore this thoroughly, or know a more suitable person who could offer that information?

I am also interested in what abuse you're talking about and, if the system is vulnerable like that, what the hold-up is in addressing the issue. The other question is: why not fix it instead of creating a whole other system?

I still don't understand how reputation cannot be verified on the blockchain; isn't it a metric tied to each individual account? So I guess the questions are why that is not the case with reputation, and how, unless I somehow got the premise of reputation wrong, it being a metric tied to each individual account.

Let's assume it is. Then it can be used to limit operations, and it should be, because without limits we have no way to counter abuse. Limits require absolutes, while people aren't absolutes, so it is not a question of who's a sinner or a saint.

Because it is absolute, it is not a detriment in any sense; it is because the current system has not implemented what I am talking about that it cannot deal with noganoo's spam attacks, or with any big whale wrecking accounts, engagement, and retention.

I hope there is some validity to these assertions, or an explanation for them; right now they are without logic or rhetoric, and therefore fall at the first gust of wind from critical questioning.

I wish I had seen this comment days ago; somehow it got lost, and not noticing it obviously didn't help. But now that it has been addressed, I hope it renews a little discussion in this direction and on the problem of flagging as a whole.

Agreed, well commented @pharesim

I don't see why a system which treats N flags the same as one flag would improve things. If 10 people are upset at what someone posted, compared to 1 person being upset, those clearly carry different weights, IMO.

Note, I said "person," not account. I think an identity system may need to be built into a reputation system for it to really function effectively. See my post on Privacy, Identity, and Human Flourishing for more thoughts on that.

