The dark side of KIK: Response to Point


The video at the bottom covers KIK, a messenger app, and how it has grown into a potential base camp for online pedophiles. Point, a UK-based media company, published it in early August 2017.


They raise a valid concern: KIK has become increasingly utilized by groups of sexual predators. Is this KIK's problem?

I think it is.

Not just theirs, though.

I think it affects every social media platform and messenger app on the planet. Anything that lets users automate or search based on age can be used by pedophiles to hunt for their next victims. For this reason, it is everyone's problem.

Damned pedophiles.

It's not something that many software developers would think of. "How can I avoid making my program something that pedophiles will use/exploit?" isn't a question that crosses the average person's mind, most likely because most people don't think about the behavior of pedophiles regularly. That doesn't matter, though.

Anonymity is a double-edged sword

What matters is that if this problem is not rooted out, all apps will face increasing demands to de-anonymize themselves. And between us, I like being anonymous on the internet. I like having conversations with people I don't know - I feel it lets us talk more easily about what's on our minds, or ask questions that would be embarrassing to have tied to us. It helps some of us learn.


Without anonymity, it can be harder to communicate. However, with anonymity, it's easier to commit some crimes.

This is the crux of our problem. This is what must be solved if we are to prove that anonymous apps are not unhealthy for society. Anonymity is a double-edged sword.

So, how?

Not easy.

Obviously, you want social apps to allow users to be social. Placing filters on younger users' profiles could be an option; however, censoring the internet is a moral grey area in and of itself. A feature that lets parents/guardians set censorship levels for the children they're responsible for could be an appropriate focus. So could other systems, such as flagging large age differences or monitoring for keywords that could be red flags (see the sketch below).
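To make the age-difference idea concrete, here's a minimal sketch of a check that could run whenever a new one-on-one chat starts. Everything in it is invented for illustration - the User shape, the self-reported ages, the 18/5-year thresholds - and none of it reflects how KIK or any other app actually works.

```python
from dataclasses import dataclass

@dataclass
class User:
    username: str
    age: int  # self-reported, so treat it as a weak signal, not proof

# Invented thresholds - real values would need tuning and policy review.
ADULT_AGE = 18
MAX_GAP_FOR_MINORS = 5

def should_flag_chat(a: User, b: User) -> bool:
    """Flag a new 1-on-1 chat for review when an adult starts
    messaging a much-younger minor."""
    younger, older = sorted((a, b), key=lambda u: u.age)
    return (
        younger.age < ADULT_AGE
        and older.age >= ADULT_AGE
        and older.age - younger.age > MAX_GAP_FOR_MINORS
    )

if __name__ == "__main__":
    print(should_flag_chat(User("kid01", 13), User("stranger", 34)))  # True
    print(should_flag_chat(User("teen16", 16), User("teen17", 17)))   # False
```

Since the ages are self-reported, a check like this is trivial to evade - which is exactly why it should only ever be one signal among several, never a gatekeeper on its own.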

Most importantly, though, you want to provide your users with resources so that they know how to handle being approached by questionable individuals. Just like you tell ol' grandma not to click every link in her FWD: FWD: fwd: chain emails, you might need to hammer this point home for some younger users.

All devs should start investing in automated filters and bots that stop and report pedophiles. Negobot, an early example, ran in chatrooms and used adaptive AI to identify and report pedophiles.
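To give a sense of what the simplest version of such a filter looks like, here's a toy keyword monitor. It's far cruder than anything like Negobot, and the phrases and threshold below are all made up; a real system would need context-aware models and human review to avoid drowning moderators in false positives.

```python
import re

# Made-up red-flag phrases - a real list would be curated by safety
# teams and paired with smarter models, not raw pattern matching.
RED_FLAGS = [
    r"\bhow old are you\b",
    r"\bdon'?t tell your parents\b",
    r"\bsend (me )?a (pic|photo)\b",
    r"\bour (little )?secret\b",
]
PATTERNS = [re.compile(p, re.IGNORECASE) for p in RED_FLAGS]
REPORT_THRESHOLD = 2  # invented: escalate after two distinct hits

def scan_message(text: str) -> list:
    """Return every red-flag pattern the message matches."""
    return [p.pattern for p in PATTERNS if p.search(text)]

def should_escalate(messages: list) -> bool:
    """True once a conversation accumulates enough distinct hits."""
    hits = set()
    for msg in messages:
        hits.update(scan_message(msg))
    return len(hits) >= REPORT_THRESHOLD

if __name__ == "__main__":
    chat = ["hey, how old are you?", "cool. this is our secret, ok?"]
    print(should_escalate(chat))  # True -> queue for human review
```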

The smart application of automation could help. Just under 750,000 pictures of child porn are uploaded daily by pedos, according to Microsoft. The corporation works with Google (and most likely several other heavy hitters) to weed out these pictures. But that's not enough.
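The core technique there is hash matching: known abusive images are fingerprinted once, and every new upload is checked against the fingerprint database. The sketch below uses a plain SHA-256 hash just to keep the idea visible; real systems like Microsoft's PhotoDNA use perceptual hashes instead, which survive resizing and re-encoding where exact hashes do not. The database contents and file path are placeholders.

```python
import hashlib
from pathlib import Path

# Placeholder database - in reality these fingerprints come from
# clearinghouses like NCMEC, as perceptual hashes rather than SHA-256.
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # placeholder entry
}

def fingerprint(path: Path) -> str:
    """Hash a file's raw bytes. A single changed pixel defeats an exact
    hash - the reason production systems use perceptual hashing."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def check_upload(path: Path) -> bool:
    """True if the uploaded file matches a known-bad fingerprint."""
    return fingerprint(path) in KNOWN_BAD_HASHES

if __name__ == "__main__":
    upload = Path("incoming/example.jpg")  # hypothetical upload path
    if upload.exists() and check_upload(upload):
        print("match found - block the upload and file a report")
```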

Bots can't solve everything. Humans, including other anonymous humans, need to play an active role in stopping this type of behavior. A solid reporting structure must be in place, where users can flag content or ask for help when something questionable starts to happen. The reporting structure can't rely on humans alone - the volume would make that an impossible task - but humans are the centerpiece of making it work. Partnering with local law enforcement may be another valid option; however, it would not stop attackers who circumvent location tracking.
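One way to picture that hybrid structure: user flags and bot flags feed a single triage queue, and automation handles the prioritization so human moderators spend their time on the most urgent cases first. All the names and the priority rule here are invented for illustration.

```python
import heapq
import itertools
from dataclasses import dataclass, field

@dataclass(order=True)
class Report:
    priority: int                         # lower = reviewed sooner
    seq: int                              # tie-breaker, keeps FIFO order
    reporter: str = field(compare=False)  # "user:..." or "bot:..."
    target: str = field(compare=False)    # account being reported
    reason: str = field(compare=False)

class ReportQueue:
    """Single triage queue fed by both users and automated filters."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()

    def file(self, reporter: str, target: str, reason: str) -> None:
        # Invented triage rule: anything mentioning a minor jumps the queue.
        priority = 0 if "minor" in reason.lower() else 1
        heapq.heappush(self._heap, Report(priority, next(self._counter),
                                          reporter, target, reason))

    def next_for_review(self):
        """Hand the most urgent report to a human moderator."""
        return heapq.heappop(self._heap) if self._heap else None

if __name__ == "__main__":
    q = ReportQueue()
    q.file("user:anon123", "creep99", "unsolicited messages")
    q.file("bot:keyword-filter", "creep99", "red-flag phrases sent to a minor")
    print(q.next_for_review().reason)  # the minor-related report comes first
```

The point of the queue is that triage order, not arrival order, decides what a human sees next - the bots sort, the humans judge.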

Back before automation and AI took off, enforcing the TOS and reporting violators was a time-consuming business. If developers can build a good reporting and follow-up system with the help of AI/automation, this problem can be stopped before it spreads too far (at least on the apps in question).

It may have spread too far already, though. In Point's video, as soon as they create an underage profile and broadcast it, they receive several pervy messages from older individuals.

Creepy shit.

Web devs who organize forums and other means of online communication should take note as well - this doesn't just apply to apps.

Parting thoughts

So, in summary, here's what Point's video made me think devs should focus on:

  • A good reporting system
  • Monitoring/flagging/voluntary censoring options
  • Providing users with resources that they can use to protect themselves
  • AI/automation
  • User buy-in to report
  • Harsh penalties for those that break this type of TOS

I wonder if any of you have similar thoughts. How do you think KIK could solve its problem?
