You are viewing a single comment's thread from:

RE: @thesloth @thedumpster @thedelegator @steemservices @danknugs @nextgencrypto @berniesanders You can't flag me if I don't post anymore

in #steem 7 years ago

I think you're totally correct here, and I also think this is the core problem with free sign-ups!

The whole bandwidth system is predicated on a set-up where the ability to wreck the system (through bandwidth spamming) is proportional to the vested stake. The disincentive (hopefully) keeps the system viable, at least until a wealthy investor decides to burn their money to blow it up.
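To make that proportionality concrete, here is a minimal sketch of the stake-proportional bandwidth idea. The function name and the constants are illustrative assumptions, not Steem's actual implementation:

```python
# Sketch of stake-proportional bandwidth: an account's allowance is its
# share of total vested stake, scaled by network capacity.
# All names and numbers here are illustrative, not Steem's real code.

def bandwidth_allowance(vested_stake: float, total_vested: float,
                        network_capacity_bytes: float) -> float:
    """Bandwidth an account may consume, proportional to its stake share."""
    if total_vested <= 0:
        return 0.0
    return network_capacity_bytes * (vested_stake / total_vested)

# Under this model a zero-stake account gets zero bandwidth:
print(bandwidth_allowance(0.0, 1_000_000.0, 10_000_000.0))   # 0.0
# ...but a free account given even a little delegated SP does not:
print(bandwidth_allowance(15.0, 1_000_000.0, 10_000_000.0))  # 150.0
```

The zero-stake case is exactly why unrestricted free accounts with delegated SP break the disincentive: each delegation moves an account out of the "no ability to spam" bucket.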

But unrestricted free accounts (even with only delegated SP) contradict this so fundamentally that I think they are incompatible with a functional system.

I get the feeling this is the crux of the problem connecting bandwidth errors, bots and spam:

Within the model, they can't have both free anonymous accounts and bandwidth control!

Accounts with no SP should have no ability to spam the blockchain, but they give a little bit of SP to each new account, which fundamentally distorts the formula in the name of pragmatism and short-term growth.

I hope with some careful consideration you can propose a different model that will permit this, but I think it's a very tricky problem to solve elegantly. I look forward to attempting to pull apart any suggestions you have, as that is what's needed ;)


I am simply going to study the activity of real, confirmed human users, and dial the bandwidth limitation down to human range. To add power to this, reputation limits the capability of misbehaving large-stake users by subjecting them to throttling caused by muting and flagging. I also have to look closely at the reputation scoring system, because it must be entirely biased towards stake; otherwise, how is it that my 69.8 reputation account can't even put a 0.1 ding in trolls like Berniesanders?
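The proposal above can be sketched in a few lines: cap bandwidth at an empirically observed "human range", then scale it down as peers mute and flag the account. The ceiling and the penalty weights below are assumptions for illustration, not measured values:

```python
# Hypothetical "human range" posting ceiling, derived (in this sketch)
# from studying confirmed human users. Not a real measured number.
HUMAN_POSTS_PER_HOUR = 20

def effective_limit(flags: int, mutes: int) -> float:
    """Shrink the human-range allowance as peers flag and mute the account.
    Weighting mutes twice as heavily as flags is an arbitrary assumption."""
    penalty = 1.0 / (1 + flags + 2 * mutes)
    return HUMAN_POSTS_PER_HOUR * penalty

print(effective_limit(0, 0))   # 20.0 -- an untroubled account keeps the full range
print(effective_limit(9, 5))   # 1.0  -- heavily flagged/muted, effectively neutered
```

The key design point is that the throttle comes from collective judgement rather than stake alone, so even a large-stake troll can be dialed down by enough peers.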

They didn't assess the threat model fully, and assumed a lot of things that cannot be substantiated once you examine it more broadly.

I think that part of their model works ok, as long as sign-ups are either paid for, or not anonymous.

What you're proposing with fixed limits is much less elegant, but quite possibly more practical.

The way I see it, and why I am emphasising collective social judgement as the solution, is that humans are the best at identifying patterns of byzantine behaviour. AIs can do it, but only after being fed a shit-ton of data; humans have already got a shit-ton of data about humans, and on a group basis have the greatest chance of actually identifying the offenders and muting and flagging them down until their bandwidth is so severely limited they cannot continue to operate the account, essentially neutering it.

I don't think there is any issue with free accounts at all. They just should not be powerful enough to do harm if gathered in the thousands, and, of course, extremely vulnerable to judgement by their peers. Well, if you can use the term 'peers' so loosely when talking about generally good people versus a scumbag.

Also, I think a simple mechanism that can limit account signup spam is binding an IP address to the signup, so that it cannot be reused for a period of time. It works reasonably well, given the constrained bandwidth of Tor and the fact that there is a quite constrained number of nodes...

This is also a reason why perhaps referrals could be used, because referrals could create a chain of reputation effects that go back to the root of a tree of signups that appears to be malicious. This can let you close a whole bank of accounts instantly.
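The "close a whole bank of accounts" step above amounts to walking the referral tree from the suspicious root. A sketch, assuming a simple mapping from referrer to referred accounts:

```python
# Sketch of closing an entire referral subtree once its root looks malicious.
# The dict-of-lists shape of `referrals` is an assumed data model.

def close_subtree(root: str, referrals: dict[str, list[str]]) -> set[str]:
    """Collect the root account and everything it (transitively) referred."""
    closed: set[str] = set()
    stack = [root]
    while stack:
        acct = stack.pop()
        if acct in closed:
            continue  # guard against cycles or duplicate referrals
        closed.add(acct)
        stack.extend(referrals.get(acct, []))
    return closed

tree = {"mallory": ["bot1", "bot2"], "bot1": ["bot3"]}
print(close_subtree("mallory", tree))  # {'mallory', 'bot1', 'bot2', 'bot3'}
```

Because every signup carries a pointer back to its referrer, one judgement against the root instantly propagates to the whole bank of accounts it spawned.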

By 'byzantine behaviour' do you mean behaviour which encourages social stratification, and reduces social mobility within the network?

Well I think if AI/Machine Learning systems can detect it, there'd be plenty of such data from this experiment!
