You are viewing a single comment's thread from:

RE: Can we have downvotes and at the same time prevent 'flag wars'?

in #steem • 4 years ago (edited)

Free flags are like free ammo; you can't really expect anything good to come of them.

I've read so many suggestions about this issue before (limit the reward per post to the median, remove the reward pool, lower the reward pool, change the whole concept to PoW, focus on Steem as a utility token, remove the reward pool and use SMTs for the PoB, use the SPS as a decision-making vector...).

People are currently too focused on what is happening now, but I believe we should look for a long-term solution. Imagine if mass adoption happens: how could any entity monitor millions or billions of genuine and fake users? It's impossible; no entity will have enough resources or manpower to constantly scan the whole chain to detect abuse. I've discussed with some people before the idea that users could store underage pornography on the chain as text, then use specific tooling to recompile the data into images. What should we do then? Fork them out? Hiding such content from the UI is useless, since pedophiles aim for secrecy in the first place. It's a delicate question that needs the consensus of the community; otherwise, we will become known as a haven for pedophiles.

I believe there is a need for a Steem Foundation, a structure that can lead the discussion and reach a clear consensus. There are a lot of things that need to be discussed and changed, but while moving forward we need to make sure not to ignore the bigger picture and not to push things based on sensationalism (i.e. changes driven by emotion, anger...).


I believe we should look for a long-term solution. Imagine if mass adoption happens: how could any entity monitor millions or billions of genuine and fake users?

This is a good point, but I think the time between now and this hypothetical future needs to be considered too. The norms and practices we have in the present shape what the future is like. And if the platform in the present is more attractive to spammers than it is to legitimate posters, then we're unlikely to get to mass adoption (similarly, we also have a problem if whatever anti-spam/abuse ideas are implemented are more burdensome on new users than on spammers).

" how can any entity monitor millions or billions of genuine and fake users."

This is a very good point; I totally agree.
There must be an automatic solution. It will be impossible to analyse every post manually, at least unless somebody is ready to pay a very large amount for it.

For now there actually aren't that many posts. To spot abuse, a tool like SteemWorld, created by @steemchiller, does a rather good job. Quite often a glance at the "Voting CSI" already tells you enough ...

If the community grows, I think the final decisions could still be made by humans, while algorithms suggest which cases to analyze more deeply.
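
To make the "algorithms suggest, humans decide" idea concrete, here is a minimal sketch of such a preselection step. It is not SteemWorld's actual "Voting CSI" formula; the input format, function name and threshold below are assumptions chosen purely for illustration.

```python
from collections import defaultdict

def flag_suspicious_voters(votes, concentration_threshold=0.5):
    """Flag voters whose vote weight is heavily concentrated on one author.

    `votes` is an iterable of (voter, author, weight) tuples; the threshold
    is an arbitrary illustrative value, not SteemWorld's actual metric.
    """
    per_voter = defaultdict(lambda: defaultdict(float))
    for voter, author, weight in votes:
        per_voter[voter][author] += weight

    flagged = []
    for voter, by_author in per_voter.items():
        total = sum(by_author.values())
        if total == 0:
            continue
        top_author, top_weight = max(by_author.items(), key=lambda kv: kv[1])
        concentration = top_weight / total
        if concentration >= concentration_threshold:
            flagged.append({
                "voter": voter,
                "top_author": top_author,
                "concentration": round(concentration, 2),
                "self_vote": top_author == voter,
            })
    # The output is only a suggestion list for human reviewers, not an automatic verdict.
    return flagged
```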

OK, sounds fine. I hope you have people who are ready to do this job.
Do you think they should get some reward for it?

Yes, they should get rewarded!

Free flags are like free ammo; you can't really expect anything good to come of them.

How do you want to prevent single users like @haejin and many others from extracting hundreds of dollars (per person) every day?
I don't consider this, plagiarism and spam to be minor problems.
It's so discouraging for newcomers to observe this kind of behaviour. Some told me that they don't want to take part in a community where pure greed gets rewarded ...
I think there has to be some solution, and if not flags, then something else.
Of course I respect your different point of view, but I cannot agree here.

It's impossible; no entity will have enough resources or manpower to constantly scan the whole chain to detect abuse.

Algorithms will do that. They seek out and preselect suspicious activities, so that humans only have to check whether the cases suggested by the software are worth investigating further. In addition, users can always report abuse manually.
Anyway, there is still a long way to go until mass adoption, and I think it's worth solving the problems now which, in the worst case, could prevent us from ever reaching mass adoption at all ...
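
As a rough illustration of that workflow (software preselects, users report, humans review), here is a sketch that merges the two signal sources into a single prioritized review queue. The input shapes and the weight given to manual reports are made-up assumptions, not anything the chain or any existing tool implements.

```python
import heapq

def build_review_queue(algorithm_flags, user_reports, report_weight=0.2):
    """Merge algorithmic suspicion scores and manual abuse reports for human review.

    `algorithm_flags` maps account -> suspicion score in [0, 1];
    `user_reports` maps account -> number of manual reports.
    Both shapes and the report weight are illustrative assumptions.
    """
    queue = []
    for account in set(algorithm_flags) | set(user_reports):
        score = algorithm_flags.get(account, 0.0)
        reports = user_reports.get(account, 0)
        # heapq is a min-heap, so negate the priority to pop the worst offenders first.
        priority = score + report_weight * reports
        heapq.heappush(queue, (-priority, account))
    return [heapq.heappop(queue)[1] for _ in range(len(queue))]
```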


Not counting self-votes toward the reward pool would solve this issue.
If you then also add the rule that all accounts created with resource credits from another account (something you can analyse on the blockchain, even retroactively) are counted, for reward-pool purposes, as the account that created them, you will also have solved the issue of multiple accounts.
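
A hedged sketch of what that attribution rule could look like, assuming account-creation operations and votes have already been extracted from the chain; the data shapes and function names below are invented for illustration and are not an actual Steem API.

```python
def find_effective_self_votes(creation_ops, votes):
    """Treat votes between accounts funded by the same creator as self-votes.

    `creation_ops` is an iterable of (creator, new_account) pairs reconstructed
    from historical account-creation operations; `votes` is an iterable of
    (voter, author, weight) tuples.
    """
    creator_of = {new_account: creator for creator, new_account in creation_ops}

    def root(account):
        # Follow the creation chain so that accounts created by created
        # accounts still resolve to the original funder.
        while account in creator_of:
            account = creator_of[account]
        return account

    return [(voter, author, weight)
            for voter, author, weight in votes
            if root(voter) == root(author)]
```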

First of all, there are (many) other ways to create multiple accounts besides using one's main account. I could, for example, ask you to create "jaki02" for me while I create "wer-verliert" for you. I can also create accounts via different front ends using different e-mail addresses and phone numbers.

Furthermore your suggestion would punish honest users who create new accounts for friends using their own resource credits, because then they couldn't support these friends anymore.

Your idea also doesn't prevent schemes in which, for example, every day whale A upvotes ten mini posts of whale B, who upvotes ten mini posts of whale C, who in turn upvotes ten mini posts of whale A.
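
For what it's worth, a ring like A -> B -> C -> A does show up as a cycle in the aggregated vote graph, which is cheap to detect. Here is a minimal sketch; the edge format is an assumption, and real detection would also need volume and payout thresholds to avoid flagging ordinary mutual voting among friends.

```python
def find_vote_cycles(edges):
    """Find circular voting rings (e.g. A -> B -> C -> A) in a vote graph.

    `edges` is an iterable of (voter, author) pairs aggregated over some period.
    Returns the cycles discovered by a simple depth-first search.
    """
    graph = {}
    for voter, author in edges:
        graph.setdefault(voter, set()).add(author)

    cycles = []
    visited = set()

    def dfs(node, path):
        visited.add(node)
        path.append(node)
        for nxt in graph.get(node, ()):
            if nxt in path:
                # Back edge: everything from `nxt` onwards forms a ring.
                cycles.append(path[path.index(nxt):] + [nxt])
            elif nxt not in visited:
                dfs(nxt, path)
        path.pop()

    for node in list(graph):
        if node not in visited:
            dfs(node, [])
    return cycles
```

For instance, find_vote_cycles([("A", "B"), ("B", "C"), ("C", "A")]) returns [["A", "B", "C", "A"]].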

"Furthermore your suggestion would punish honest users who create new accounts for friends using their own resource credits, because then they couldn't support these friends anymore."

OK, this is a very good point; to be honest, I wasn't aware of it.
