You are viewing a single comment's thread from:

RE: An eye for an eye? Analyzing the Steem flagging behavior

in #utopian-io · 6 years ago

I'd like an analysis done to estimate the percentage of users that are actually bots.

It's probably much higher than anyone thinks, and their sole purpose is to game the Reward Pool.

Call it the "Steemit Decline Index".

Because it is declining.


That's a tricky topic, and I expect it's pretty hard to classify 900k accounts. There are first classifier tools, but they're far from perfect. And what is "a bot"? For some accounts it's obvious, but there are lots of half-human, half-machine things out there...

And do we count activity that is merely automated through an otherwise manually managed account? I would love to know how much of the day-to-day voting is done by people automatically voting for whatever someone else voted for. That is automation driven by off-site bots, but via the proxy of a manually operated account – and I'm not sure we can determine with any precision whether that kind of behavior is going on.

Maybe we could look for trains of voting activity that occur in extremely small time slices, all focused on the same comment or post. But given the widespread knowledge that you really want to vote on something around the 20-minute mark, it might be hard to distinguish what is driven by automation from what is just cleverly timed manual intervention.
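A rough sketch of that "vote train" idea in Python, using made-up vote records (real data would come from the blockchain; the 2-second window and minimum train size are pure guesses):

```python
from collections import defaultdict

# Hypothetical vote records: (voter, post_id, timestamp in seconds).
votes = [
    ("alice", "post-1", 1200.0),
    ("bob",   "post-1", 1200.4),
    ("carol", "post-1", 1200.9),
    ("dave",  "post-1", 1350.0),
    ("erin",  "post-2", 1201.0),
]

def vote_trains(votes, window=2.0, min_size=3):
    """Find groups of votes on the same post landing within `window` seconds."""
    by_post = defaultdict(list)
    for voter, post, ts in votes:
        by_post[post].append((ts, voter))
    trains = []
    for post, entries in by_post.items():
        entries.sort()
        i = 0
        while i < len(entries):
            # Grow the train while the next vote is within `window` of the anchor.
            j = i
            while j + 1 < len(entries) and entries[j + 1][0] - entries[i][0] <= window:
                j += 1
            if j - i + 1 >= min_size:
                trains.append((post, [v for _, v in entries[i : j + 1]]))
            i = j + 1
    return trains

print(vote_trains(votes))  # -> [('post-1', ['alice', 'bob', 'carol'])]
```

Of course a human pile-on at the 20-minute mark would trip the same filter, which is exactly the ambiguity described above.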

At a certain point all we can do is consider our sensors versus what we can derive from them, and that gets very disappointing quickly.

Exactly. A vote 3 seconds after a post is created/edited/voted-by-X is probably automated; the same comment on 100 posts, likewise. But there are auto-votes, trails, vote-selling, pretty clever comment bots...
Autovoters have varying response times as well, so detecting auto-votes would have to happen in ranges. Also, not all trailers will always vote in the same order, so this comes down to detecting an unknown list of voters within a guessed range of time...
Stuff like this can certainly be detected somehow, but the complexity and computing time can grow pretty high pretty fast, and I'm sure there will be false positives in between.
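As a toy illustration of the range-based idea: an autovoter fires at a configured delay with small jitter, so the spread of its post-to-vote delays stays tiny, while human delays are erratic. The account names, delay data, and thresholds below are all invented:

```python
import statistics

# Hypothetical data: for each post an account voted on, the delay in
# seconds between post creation and that account's vote.
delays = {
    "suspected_bot": [31.2, 30.8, 31.5, 30.9, 31.1],
    "human_voter":   [45.0, 600.0, 12.0, 3000.0, 180.0],
}

def looks_automated(account_delays, max_spread=5.0, min_votes=5):
    """Flag accounts whose vote delays cluster in a narrow band.

    A small standard deviation suggests a fixed, configured delay;
    the thresholds here are arbitrary guesses, not tuned values.
    """
    if len(account_delays) < min_votes:
        return False
    return statistics.stdev(account_delays) <= max_spread

for account, d in delays.items():
    print(account, looks_automated(d))
```

This only catches fixed-delay voters; a trail that reacts to another account's vote would need the reference event (the trailed vote) as the baseline instead of post creation, which is where the complexity starts to blow up.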

My first inclination would be to look for "bot-like" behavior: rapid voting outside of human response times, posting identical comments to multiple threads, easy things like that.

It would get harder to edge into the cloud of data with more precise definitions, but some of the more obvious behavior may be a starting point.

I realize it isn't an easy classification problem, but there are some edges from which it could be attacked, even in a limited sense.
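The duplicate-comment check is the easiest of those edges to sketch. With invented comment data (author names and bodies are hypothetical):

```python
from collections import Counter

# Hypothetical comment records: (author, comment_body).
comments = [
    ("promo1", "Great post! Follow me for more."),
    ("promo1", "Great post! Follow me for more."),
    ("promo1", "Great post! Follow me for more."),
    ("writer", "Interesting take on the reward pool."),
]

def duplicate_commenters(comments, threshold=3):
    """Flag authors who post the exact same comment `threshold` or more times."""
    counts = Counter(comments)
    return {author for (author, body), n in counts.items() if n >= threshold}

print(duplicate_commenters(comments))  # -> {'promo1'}
```

Exact matching like this is trivially evaded by templated comments with small variations, so a real pass would probably need fuzzy matching – but as a first cut at the obvious behavior, it is cheap to run over the whole chain.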
