The Dangers Of Predictive Technologies

in #technology · 7 years ago

If only there were a way to stop criminals before they ever acted. With the evolution of artificial intelligence and machine learning, such models are becoming more and more of a reality. There is no doubt that this technology could produce great benefits for mankind. We could greatly reduce crime and catch criminals while their offenses are still minor, salvaging their lives as well as improving everyone else's. But such technology also tends to make people uncomfortable. And there are good reasons to be uncomfortable.


[Image: pexels-photo-207586.jpg]
Those drones have been following us for like the last ten minutes

Pre-Crime

One use of predictive modeling is pre-crime. You take a large set of information about crimes that happened in the past and use it to train a model. You then feed real-time data into that trained model, and it alerts you when it finds a potential crime that may occur in the near future. A rough sketch of what such a pipeline looks like follows below. The first issue with predictive models is that they aren't 100% correct, and they never will be, because you never have enough data: some cases will be missing from your dataset, or your model will not be sophisticated enough.
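For the programmers in the audience, here is a minimal sketch of that train-then-alert pipeline in Python using scikit-learn. Everything in it is hypothetical: the data is synthetic, and the feature names, numbers, and alert threshold are illustrations, not anything a real pre-crime system uses.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Historical incident data (entirely synthetic). The three feature columns
# are hypothetical: hour of day, number of prior offenses, distance in meters.
X = rng.random((1000, 3)) * np.array([24.0, 10.0, 5000.0])
y = (rng.random(1000) < 0.1).astype(int)  # 1 = a crime followed, 0 = it did not

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Step 1: train the model on past crimes.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Step 2: feed in a "real-time" observation and alert above a threshold.
ALERT_THRESHOLD = 0.8  # hypothetical; lower it and you catch more crimes but flag more innocents
live_observation = np.array([[23.5, 4.0, 120.0]])  # 11:30 pm, 4 priors, 120 m away
p_crime = model.predict_proba(live_observation)[0, 1]
if p_crime >= ALERT_THRESHOLD:
    print(f"ALERT: predicted crime probability {p_crime:.2f}")
else:
    print(f"No alert: predicted crime probability {p_crime:.2f}")
```

Notice that the alert threshold is where the philosophy hides: wherever you set it, some of the people it flags would never have gone through with anything. The false positives discussed next are baked into the math.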

But even with the pitfalls that exist in all predictive models, maybe we are comfortable with a few false positives. The next issue is a more philosophical one: should a person be punished for a crime they never actually got the chance to commit? Sure, Johnny pulled up to the convenience store with a gun and a ski mask in the passenger seat. But what if he would have turned around and done nothing? What if that moment of closeness to the edge would have changed him? With predictive technology, let's say we arrested him instead. Certainly we can't punish him as if he had committed the crime. But can we punish him at all? Will we punish people like this? We may have stopped a crime, or we may have wrecked someone's life. We'll never know.

The Problem With Assigning Guilt

Maybe the benefits of the technology outweigh the costs. Maybe some innocent people have to be found guilty. I personally think that is dangerous. How fair is it to assign guilt to a person who was very likely to commit a crime but never actually committed one? Sure, there is a good likelihood they would have gone through with it. But what about the case above? Isn't Johnny an innocent man? Ultimately, it will be up to society to make up its mind.

Another issue with the assignment of guilt comes not in cases of immediate danger, but in cases where the technology is used as a preventative measure. Let's think beyond pre-crime and look at character development. Maybe we'll have predictive models that spit out the likelihood of someone committing a crime in the not-so-near future. Now we are assigning guilt in the long term, before thoughts of committing any crime even happen. People will argue that we could eliminate the vast majority of crime using this technology. But at what point are we manufacturing our children? Perhaps this is necessary; we manufacture kids to some extent today. Let's keep kids from getting into the dilemma that Johnny faced earlier. But what happens if we accidentally root out some form of creativity? Technology always carries potential unknown consequences that pop up later. Examples: nuclear everything, depletion of the ozone layer, and obesity.

Other Cases

But we don't have to limit predictive modeling to detecting criminal behavior. We could use it to detect undesirable behavior, and note that "undesirable" is in the eye of the beholder and will differ vastly from person to person. Let's say a politician uses predictive modeling to detect which potential candidates pose a threat, then runs preemptive opposition research and ads to remove those candidates before they gather enough momentum to be dangerous. It doesn't sound that bad until you realize that political organizations could use the same technology to perform social engineering and run propaganda campaigns.

Or go back to the example of the children. What if we preselected the best kids based on their behavior and gave them first pick of the advanced opportunities in the modern world? We could use algorithms to assign kids to specialties at earlier ages and direct resources toward those more likely to become the next Einstein or Mozart. But remember, models can be wrong, and maybe that kind of social engineering is not the best course of action. Such models may undermine the freedom and will of the kids even when they are right about the best path to success, and a wrong model might abandon kids to a future lesser than their potential.

At the end of the day, predictive models have a fundamental problem: their basis is in predicting the future. They are rigid in nature, and rigid systems are doomed to fail in some cases. And we might never know how those failures would have turned out had we not tampered with the system. There is always danger in a wrong prediction. Example: investing.

Sources

Image Of Drones
