RE: Cybersecurity in the World of Artificial Intelligence

in #security · 7 years ago (edited)

And on the topic of regulation, I would say governments should support efforts in digital governance. An AI can be aware of the law at a level far beyond the citizens who control it. I cannot know every law in force at any given time, but an intelligent agent can, and can operate accordingly. If I control intelligent agents in cyberspace and want them to follow the law, then those agents need to be made aware of it, which means the laws (and the whole structure of how government is set up) must be part of their knowledge base.

The government as it is set up now simply wouldn't know which regulations to introduce, and even if individual governments tried to regulate, it would only work for centralized companies like Google. At worst, their regulations will just slow down the evolution of the technology.

Standards bodies will also likely be involved in setting guidelines and establishing acceptable architectural models. Thought-leading organizations will likely begin to incorporate forward-thinking cybersecurity controls that protect the digital security of systems, the physical safety of users, and the personal privacy of people.

But how will these standards bodies form, and where will they come from? Will they be elitist and represent only the interests of American corporations, or will they be open, global, and let anyone who is a stakeholder (all of humanity) contribute?

As my post shows, I would like to see greater discussion of the implications of AI, and I would welcome discussion with the cybersecurity community. I think that is what we need most, and at least on Facebook and elsewhere these discussions are happening. The issue is that some people don't want AI to be regulated at all, while others want AI to be safe.

But right now best practices aren't really known. What most people agree on is a ban on autonomous weapons; I would say weaponized AI is incredibly dangerous. There is also a debate over whether AI must be decentralized to be maximally beneficial. Decentralized AI has its own security implications, partly because governance in a decentralized digital environment works differently, and for the most part the technologies for that kind of governance haven't been built yet.

Part of the problem I expect to see is that very orthodox thinkers will dominate these debates in the cybersecurity community. As a result, some unorthodox yet feasible ideas might go unheard or be missed. The idea that regulation can be decentralized, or that AI can be self-regulating, isn't even considered in your blog post, but at least for beneficial, pro-social AI this may be the case. A self-driving car might follow the rules of the road better than any human being, and an autonomous agent might act on behalf of its human owner only in legal and ethical ways, making far fewer legal or ethical errors than a human would.

I myself have been working on designing a decentralized governance platform, or tool, which I think could be part of the solution. Funding the development of these sorts of digital governance platforms could make AI a lot safer, because the main issue in cyberspace is that there is virtually no governance; yet with AI comes the ability to augment and amplify order, governance, and safety. This could be done not locally but digitally and globally, with the right tools in place so that everyone has access to AI (such as autonomous agents) aware of the laws, moral standards, social norms, religious traditions, and ethics. All of this is just data, knowledge, which can be input automatically or manually into a knowledge base.
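To make the knowledge-base idea concrete, here is a minimal sketch of an autonomous agent consulting a rule base before acting. Everything here is hypothetical for illustration — the rule names, the jurisdiction codes, and the `permits` predicate are made up, not drawn from any real legal corpus or platform.

```python
# Sketch: an agent checks a proposed action against a knowledge base of
# rules before carrying it out. All names here are illustrative.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    jurisdiction: str
    # Returns True when the proposed action is permitted under this rule.
    permits: Callable[[dict], bool]

class KnowledgeBase:
    def __init__(self):
        self.rules: list[Rule] = []

    def add(self, rule: Rule) -> None:
        self.rules.append(rule)

    def permitted(self, action: dict) -> bool:
        # An action is permitted only if every applicable rule allows it.
        applicable = [r for r in self.rules
                      if r.jurisdiction == action.get("jurisdiction")]
        return all(r.permits(action) for r in applicable)

kb = KnowledgeBase()
kb.add(Rule("speed_limit", "US-CA",
            lambda a: a.get("speed_mph", 0) <= 65))

print(kb.permitted({"jurisdiction": "US-CA", "speed_mph": 60}))  # True
print(kb.permitted({"jurisdiction": "US-CA", "speed_mph": 80}))  # False
```

The point is that "following the law" becomes a lookup against data the agent carries, so updating the law means updating the knowledge base, not the agent.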

Traditional governments, for example, could fund the development of technologies like LKIF and publish their laws in a digital format an AI can actually make use of, such as through a linked-data approach. All governments would have an incentive to do this, because putting their legal codes in digital form would make it easier for autonomous agents in cyberspace to follow them. Federal and state governments, in my opinion, need to help AI understand them, and this will align their interests.
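A toy illustration of the linked-data idea: legal provisions stored as subject-predicate-object triples that an agent can query. The URIs and predicates below are invented for illustration only; a real system would use an actual ontology such as LKIF-Core rather than these made-up terms.

```python
# Toy triple store: each statement about a legal provision is a
# (subject, predicate, object) triple. All identifiers are fictional.

triples = [
    ("law:VehicleCode-22350", "rdf:type",       "lkif:Legal_Rule"),
    ("law:VehicleCode-22350", "lkif:applies_in", "geo:California"),
    ("law:VehicleCode-22350", "lkif:prohibits",  "act:UnsafeSpeed"),
    ("law:PenalCode-484",     "rdf:type",       "lkif:Legal_Rule"),
    ("law:PenalCode-484",     "lkif:prohibits",  "act:Theft"),
]

def query(s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [(ts, tp, to) for (ts, tp, to) in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# Which rules prohibit theft?
print(query(p="lkif:prohibits", o="act:Theft"))
```

Because the format is uniform, an agent can traverse from an action to the rules governing it without parsing legal prose, which is exactly what publishing legal codes as linked data would enable.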

References

  1. http://www.estrellaproject.org/lkif-core/
  2. https://en.wikipedia.org/wiki/Linked_data
  3. https://en.wikipedia.org/wiki/LKIF
