Pentagon and DARPA Seek Predictive A.I. to Uncover Enemy Thoughts


By Nicholas West

I've recently been covering the widening use of predictive algorithms in modern-day police work, which has frequently been compared to the "pre-crime" we have seen in dystopian fiction. However, what is not discussed nearly as often are the many examples of how faulty this data still is.

All forms of biometrics, for example, use artificial intelligence to match identities against centralized databases. However, in the UK we saw police roll out a test of facial recognition at a festival late last year that resulted in 35 false matches and only one accurate identification. Although this extreme inaccuracy is the worst case I've come across, many experts are concerned about the expansion of biometrics and artificial intelligence in police work, given that various studies have concluded these systems may not be reliable enough to serve as the basis for any system of justice.
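To put that result in perspective, here is a quick back-of-the-envelope check using only the two figures reported above (the trial's fuller statistics, such as how many people were scanned in total, are not included here):

```python
# Sanity check on the UK festival trial figures cited above:
# 35 false matches against 1 accurate identification.
true_matches = 1
false_matches = 35
total_alerts = true_matches + false_matches

precision = true_matches / total_alerts            # share of alerts that were correct
false_discovery_rate = false_matches / total_alerts

print(f"Precision: {precision:.1%}")                         # ~2.8%
print(f"False discovery rate: {false_discovery_rate:.1%}")   # ~97.2%
```

In other words, roughly 97% of the alerts the system raised pointed at the wrong person.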

The type of data collected above is described as "physical biometrics." However, a second category is also gaining steam in police work, one that centers primarily on our communications: this is called "behavioral biometrics."

The analysis of behavior patterns feeds predictive algorithms that claim to identify "hotspots" in the physical or virtual world: places where activity falls outside the norm and might indicate the potential for crime, social unrest, or anything else out of pattern. The same mechanism is at the crux of the systems emerging online to identify terrorist narratives and the various other forms of speech deemed to "violate community guidelines," and it is arguably what is driving the current social media purge of nonconformists. Yet, as one recent prominent example illustrates, the foundation for determining "hate speech" is shaky at best; even so, people are losing their free speech, and even their livelihoods, based solely on the determinations of these algorithms.
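To make the mechanism concrete, here is a generic sketch of how a "pattern outside the norm" flag might work. This is an invented illustration of the basic idea, not any vendor's or agency's actual system; the data and threshold are made up:

```python
import statistics

# Generic sketch of a "hotspot" flag: score today's activity against
# the historical norm and flag statistical outliers. The numbers are
# invented; real systems are far more complex and, as argued above,
# far more error-prone than this toy suggests.

history = [12, 9, 14, 11, 10, 13, 12, 8, 11, 10]  # past daily incident counts
mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_hotspot(todays_count, threshold=3.0):
    """Flag activity more than `threshold` standard deviations above the norm."""
    z_score = (todays_count - mean) / stdev
    return z_score > threshold

print(is_hotspot(11))  # False: within the historical norm
print(is_hotspot(25))  # True: flagged as a "hotspot"
```

Everything downstream, from a police visit to a deplatforming, hinges on where that threshold is set and on whether the historical data means what the model assumes it means.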

The Anti-Defamation League (ADL) recently announced an artificial intelligence program being developed in partnership with Facebook, Google, Microsoft and Twitter to "stop cyberhate." In their video, you can hear the ADL's Director of the Center for Technology & Society admit to a "78-85% success rate" for the A.I. program's detection of hate speech online. I heard that as a 15-22% failure rate. And they are the ones defining the parameters. That is a disturbing margin of error, especially from people who get to define a nebulous concept and presume to know exactly what they are looking for.
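To see why that margin matters at scale, consider a rough, hypothetical calculation; the daily post volume below is an assumption for illustration only, not a figure from the ADL:

```python
# Hypothetical illustration of what a 15-22% error rate means at scale.
# posts_per_day is an invented figure, not one reported by the ADL.
posts_per_day = 1_000_000

for success_rate in (0.78, 0.85):
    misclassified = posts_per_day * (1 - success_rate)
    print(f"At {success_rate:.0%} success: ~{misclassified:,.0f} misclassified posts per day")
```

Even at the optimistic end of the range, that is 150,000 wrong calls per day on a million posts.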

The above examples (and there are many more) should force us to imagine how error-prone current A.I. could be once we account for the complexities of military strategy and political propaganda. One might assume, of course, that the U.S. military has access to better technology than what is being deployed by police or social media companies. But these systems all ultimately occupy the same space and overlap in increasingly complex ways that can generate an array of potentially false matches. When it comes to war, this is an existential risk that far surpasses even the gross violations of civil liberties we see in police work and our online communications.

Nevertheless, according to an article in Defense One, the Pentagon wants to use these potentially flawed algorithms to read enemy intentions and perhaps even to take action based on the findings. The new system is being called COMPASS:

This activity, hostile action that falls short of — but often precedes — violence, is sometimes referred to as gray zone warfare, the ‘zone’ being a sort of liminal state in between peace and war. The actors that work in it are difficult to identify and their aims hard to predict, by design.

“We’re looking at the problem from two perspectives: Trying to determine what the adversary is trying to do, his intent; and once we understand that or have a better understanding of it, then identify how he’s going to carry out his plans — what the timing will be, and what actors will be used,” said DARPA program manager Fotis Barlos.

Dubbed COMPASS, the new program will “leverage advanced artificial intelligence technologies, game theory, and modeling and estimation to both identify stimuli that yield the most information about an adversary’s intentions, and provide decision makers high-fidelity intelligence on how to respond — with positive and negative tradeoffs for each course of action,” according to a DARPA notice posted Wednesday.

Source: The Pentagon Wants AI To Reveal Adversaries’ True Intentions
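The notice gives no implementation details, but "identify stimuli that yield the most information about an adversary's intentions" is the standard language of Bayesian experimental design: choose the probe whose expected response most reduces your uncertainty about what the adversary intends. A minimal sketch of that idea follows; every probability in it is invented purely for illustration:

```python
import math

# Minimal sketch of expected-information-gain probe selection, the kind
# of idea suggested by the "stimuli that yield the most information"
# language in DARPA's notice. Every probability below is invented.

def entropy(dist):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Prior belief over hypothetical adversary intents.
prior = {"escalate": 0.5, "posture": 0.3, "stand_down": 0.2}

# For each candidate probe: P(observed response | intent).
probes = {
    "naval_exercise": {
        "escalate":   {"mobilize": 0.7, "ignore": 0.3},
        "posture":    {"mobilize": 0.4, "ignore": 0.6},
        "stand_down": {"mobilize": 0.1, "ignore": 0.9},
    },
    "diplomatic_note": {
        "escalate":   {"mobilize": 0.5, "ignore": 0.5},
        "posture":    {"mobilize": 0.5, "ignore": 0.5},
        "stand_down": {"mobilize": 0.4, "ignore": 0.6},
    },
}

def expected_information_gain(prior, likelihoods):
    """Expected reduction in entropy over intents after seeing the response."""
    responses = next(iter(likelihoods.values())).keys()
    gain = 0.0
    for r in responses:
        p_r = sum(prior[i] * likelihoods[i][r] for i in prior)              # P(response)
        posterior = {i: prior[i] * likelihoods[i][r] / p_r for i in prior}  # Bayes' rule
        gain += p_r * (entropy(prior) - entropy(posterior))
    return gain

for name, likelihoods in probes.items():
    print(f"{name}: {expected_information_gain(prior, likelihoods):.3f} bits")
```

The point of the sketch is also the point of this article: the output is only as good as those probability tables, and in the real "gray zone" nobody hands you the tables.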

Depending on how those "tradeoffs" are weighed, the system could form a justification for military deployment to a "hotspot," much as we have seen with the Chicago police "Heat List," which sends officers to visit marked individuals before any crime has even been committed. In this case, though, the political ramifications of even a single false trigger could be disastrous.

The program aligns well with the needs of the Special Operations Forces community in particular. Gen. Raymond “Tony” Thomas, the head of U.S. Special Operations Command, has said that he’s interested in deploying forces to places before there’s a war to fight. Thomas has discussed his desire to apply artificial intelligence, including neural nets and deep learning techniques, to get “left of bang.”

As Defense One rightly suggests, there is a massive gulf between analyzing big data for shopping patterns or other online activity and modeling the many dimensions of modern warfare and political destabilization efforts.

Whether or not the COMPASS system ever becomes a reality, it appears at the very least that military intelligence will be seeking more data than ever before from every facet of society in the name of creating more security. That alone should spark heightened debate about how far down this road we are willing to travel.

For an excellent analysis about the central concerns raised in this article, please see: “Predictive Algorithms Are No Better At Telling The Future Than A Crystal Ball”

Nicholas West writes for Activist Post. Support us at Patreon. Follow us on Facebook, Twitter, and Steemit. Ready for solutions? Subscribe to our premium newsletter Counter Markets.




DARPA is an endless source of both laughs and horrors. This is by no means the worst or scariest idea they've had. Haha.

Mistakes? We don't make mistakes!!


(from the classic Terry Gilliam movie Brazil)
