RE: @thesloth @thedumpster @thedelegator @steemservices @danknugs @nextgencrypto @berniesanders You can't flag me if I don't post anymore

in #steem · 7 years ago

I am simply going to study the activity of real, confirmed human users, and dial the bandwidth limitation down to human range. To add teeth to this, reputation limits the capability of misbehaving large-stake users by subjecting them to the limiting caused by muting and flagging. I also have to look closely at the reputation scoring system, because it must be entirely biased towards stake; otherwise, how is it that my 69.8-reputation account can't put even a 0.1 ding in trolls like @berniesanders?
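A minimal sketch of what "bandwidth dialed to human range, then scaled down by flags" could look like. The function name, the baseline budget, and the scaling factors are all my own assumptions for illustration, not Steem's actual algorithm:

```python
# Hypothetical sketch: flag-weighted bandwidth limiting.
# HUMAN_BASELINE_BYTES is an assumed per-day budget tuned to human posting rates.
HUMAN_BASELINE_BYTES = 50_000

def bandwidth_allowance(reputation: float, flag_weight: float) -> int:
    """Scale a fixed human-range budget down as stake-weighted flags accumulate.

    reputation:  0..100 score, as displayed on-site
    flag_weight: total stake-weighted flags against the account, 0..1
    """
    rep_factor = max(reputation, 1) / 100    # low reputation => small allowance
    penalty = 1.0 - min(flag_weight, 0.95)   # heavy flagging nearly zeroes it
    return int(HUMAN_BASELINE_BYTES * rep_factor * penalty)
```

The point of the shape: an unflagged high-reputation account keeps the full human-range budget, while a heavily flagged account is throttled to near zero regardless of stake, which is the "neutering" effect described below.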

They didn't fully assess the threat model, and assumed a lot of things that can't be substantiated once you examine them more broadly.


I think that part of their model works ok, as long as sign-ups are either paid for, or not anonymous.

What you're proposing with fixed limits is much less elegant, but quite possibly more practical.

The way I see it, and why I am emphasising collective social judgement as the solution, is that humans are the best at identifying patterns of byzantine behaviour. AIs can do it, but only after being fed a shit-ton of data; humans have already got a shit-ton of data about other humans, and on a group basis have the greatest chance of actually identifying the offenders and muting and flagging them down until their bandwidth is so severely limited they cannot continue to operate the account, essentially neutering it.

I don't think there is any issue with free accounts at all. They just should not be powerful enough to do harm when gathered in the thousands, and they should, of course, be extremely vulnerable to judgement by their peers. Well, if you can use the term 'peers' so loosely when talking about generally good people versus a scumbag.

Also, I think a simple mechanism that can limit account-signup spam is binding an IP address to the signup, so that the address cannot be reused for a period of time. It works reasonably well, and the constrained bandwidth of Tor, plus the fact that there is quite a constrained number of nodes, makes it hard to evade at scale...
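The IP-binding idea is just a per-address cooldown. A minimal sketch, where the cooldown length and data structure are assumptions for illustration rather than a real Steem parameter:

```python
import time

# Hypothetical sketch: one signup per IP address per cooldown window.
COOLDOWN_SECONDS = 7 * 24 * 3600  # assumed: one week before an IP can be reused

_last_signup = {}  # ip address -> timestamp of its most recent signup

def try_signup(ip, now=None):
    """Allow the signup only if this IP hasn't been used within the cooldown."""
    now = time.time() if now is None else now
    last = _last_signup.get(ip)
    if last is not None and now - last < COOLDOWN_SECONDS:
        return False  # address still bound to an earlier signup
    _last_signup[ip] = now
    return True
```

For example, a second signup from the same address inside the window is rejected, while a fresh address, or the same address after the window expires, goes through.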

This is also a reason why referrals could perhaps be used: referrals create a chain of reputation effects that traces back to the root of a tree of signups that appears to be malicious. That lets you close a whole bank of accounts instantly.
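Concretely, a referral chain is just a tree: walk the chain back to the root signup, then close every account descended from it. A sketch under assumed data structures (the account names and helpers are illustrative):

```python
from collections import defaultdict

# Hypothetical referral records: who referred whom.
referrer = {}                  # account -> the account that referred it
children = defaultdict(list)   # account -> accounts it referred

def record_referral(new_account, referred_by):
    referrer[new_account] = referred_by
    children[referred_by].append(new_account)

def root_of(account):
    """Walk the referral chain back to the original signup."""
    while account in referrer:
        account = referrer[account]
    return account

def close_subtree(account):
    """Collect the account and every signup descended from it."""
    closed, stack = set(), [account]
    while stack:
        a = stack.pop()
        if a not in closed:
            closed.add(a)
            stack.extend(children[a])
    return closed
```

So if one leaf account in a signup farm is identified as malicious, `close_subtree(root_of(leaf))` yields the whole bank of accounts in one pass.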

By 'byzantine behaviour' do you mean behaviour which encourages social stratification, and reduces social mobility within the network?

Well, I think if AI/machine-learning systems can detect it, there'd be plenty of such data from this experiment!
