I, Robot LLC: Making Artificial Intelligence a Person

In 1992, the prominent legal scholar Lawrence B. Solum wrote:

“Could an artificial intelligence become a legal person? As of today, this question is only theoretical. No existing computer program currently possesses the sort of capacities that would justify serious judicial inquiry into the question of legal personhood.”

Twenty-six years of progress later, however, the question of granting AI agents legal personhood, or at least some comparable status, remains hotly debated among lawyers, regulators, and tech enthusiasts.

EU regulators have apparently taken the lead in this regard. The European Parliament recently proposed a set of rules to govern the creation and use of AI agents, along with a specific personhood status that would fit advanced artificial intelligence.

In May, by contrast, debates in another major AI hub, the US, resulted in prioritizing technological development while offering virtually no rules for AI creators and operators to follow. According to the US President’s technology advisor Michael Kratsios, the country’s AI developers shouldn’t expect strict AI regulation, as “the administration’s primary objective is not to dictate what kind of AI systems to build or plan how AI development should proceed.” In essence, the US intends to invest heavily in domestic AI development and leave the question of whether the future will be safe and bright or dangerous and grim to tech companies like Google, Amazon, and Facebook, which have already faced serious allegations over their handling of privacy and other sensitive matters.

The all-in and hands-off regulation models both have their advocates and opponents, and each side can offer a few pretty convincing arguments in its favor. Generally, the entire debate comes down to a single question: does an AI really need a legal status to operate successfully within human society?

One of the most obvious positive aspects of a legal status for AI, such as legal personhood, is relatively clear liability. If an AI agent advanced enough to act autonomously, learn, and make decisions without a human operator were to knowingly or unknowingly violate laws or harm people, somebody should be held responsible for that.

If an AI agent isn’t “somebody” but “something,” then it isn’t the one to blame, even though it is capable of making decisions. In that case, a litigator would probably come after the machine’s creator, just as the United States Environmental Protection Agency came after the manufacturer of a car whose emissions violated the Clean Air Act. The other option is to blame not the creator but the person who owns and uses the AI agent, just as it is reasonable to blame the owner of a computer who uses the machine to do illegal things. Still, the car and the computer in these examples aren’t capable of making their own decisions, while some machines already are. And that’s where things get complicated.

If an AI agent is considered a legal person and is therefore liable for its actions or lack thereof, another problem arises: how would you punish a computer? Imagination may paint a gruesome picture of dismantled PCs, smashed microchips, and leather-clad people pouring cold water over the nervously blinking LEDs of a server rack. In reality, though, a “rogue” AI would likely simply be shut down until its developers figured out and fixed whatever led their poor creation to do what it did. But that isn’t an ultimate solution either. So what is? There seems to be no clear answer that everyone would agree upon.

One Europe to Regulate Them AI

In the EU, the future fate of robots is not yet decided; however, the European Parliament has already started thinking ahead in an attempt to anticipate the ramifications of the extensive robot use that lies down the road. In particular, the Parliament’s legal affairs committee passed, in a 17-to-2 vote, a report that provides insight into the future regulation of artificial intelligence and everything it entails.

“Advances in AI, robotics and so-called ‘autonomous’ technologies have ushered in a range of increasingly urgent and complex moral questions. Current efforts to find answers to the ethical, societal and legal challenges that they pose and to orient them for the common good represent a patchwork of disparate initiatives. This underlines the need for a collective, wide-ranging and inclusive process of reflection and dialogue, a dialogue that focuses on the values around which we want to organise society and on the role that technologies should play in it,” the European Commission wrote in its collective statement on the matter.

Legally speaking, the report itself isn’t even a bill or a draft, but merely a set of points of concern that legislators should address when it comes to making law. Still, the basic provisions in the report suggest that the EU should establish certain things in the world of robotics, including:

  • A European Agency for Robotics and AI, which is yet to be created.
  • A uniform legal definition of “smart autonomous robots,” the most advanced of which will have to be duly registered, even though the registration system is yet to be developed.
  • The development of an advisory code of conduct for robotics engineers focused on ethics, production, and use.
  • A requirement for companies to report the involvement of robots and AI in their operations for the purposes of taxation and social security contributions.
  • A mandatory insurance scheme covering damages caused by robots operated by companies.

“A growing number of areas of our daily lives are increasingly affected by robotics. In order to address this reality and to ensure that robots are and will remain in the service of humans, we urgently need to create a robust European legal framework,” says the report’s author, Luxembourg MEP Mady Delvaux.

In a few words, what the EU proposes is to establish the institution of “electronic personhood,” which in essence would make robots liable for their actions. The report compares it to corporate personhood, the institution that allows companies to participate in legal cases as plaintiffs or respondents. This makes sense in a paradigm where robotic entities, whatever they may be, can actually make decisions on their own and therefore have to be held liable for their actions and their consequences.

This approach, however, has been widely criticized by robotics experts, engineers, medical doctors, and moral philosophers, who called it “nonsensical” in an open letter. In particular, the signatories said that the assumptions underpinning the report’s proposals derive from “an overvaluation of the actual capabilities of even the most advanced robots, a superficial understanding of unpredictability and self-learning capacities, and a robot perception distorted by Science-Fiction and a few recent sensational press announcements.”

One of the letter’s signatories, Nathalie Nevejans, a law professor at the Université d’Artois, claimed that “by adopting legal personhood, we are going to erase the responsibility of manufacturers.”

Emeritus professor of AI and robotics at the University of Sheffield and chair of ICRAC Noel Sharkey, who also signed the open letter, stated that the European Parliament’s report gives robot manufacturers “a slimy way [of] getting out of their responsibility,” adding that “when they start bringing it to the U.N. and giving nations the wrong idea of what robotics can do and where AI is at the moment, it’s very, very dangerous.”

Speaking about the impression of advancement in robotics and AI created by the media, he noted:

“It’s very dangerous for lawmakers as well. They see this and they believe it, because they’re not engineers and there is no reason not to believe it.”

Agreeing with him is Wolfie Christl, a programmer, researcher, and publicist, who said on Twitter that “this legal personhood is just a distraction” as there are “enough legal [and] practical issues with holding companies and physical persons accountable for products/services that are referred to as robots or AI.”

Koen Hindriks, associate professor at TU Delft in the Netherlands, also thinks that the proposed regulations are too far-fetched.

“People think too much about Star Trek and science fiction. We are still far from that. We need to pay much more attention to real problems that are coming. Suppose a robot operates, who is responsible?” he said in an interview.

Aida Ponce Del Castillo, a senior researcher at the European Trade Union Institute (ETUI), doesn’t think creating electronic personhood can be that easy either.

“According to current legal theory, granting legal personality to artificial agents is complex. It is not a case of simply equating robots to corporations. This opens the personhood debate, which has always been a source of controversy (as evidenced by the status of slaves or women in the past, or other beings and corporations more recently),” she wrote.

While such statements may sound like nails on a chalkboard to technophiles and to those who believe that robots will eventually become as sentient as humans, and will therefore need their rights respected (and established), some go as far as to argue that robots cannot become sentient at all, at least not in a way that human beings could perceive as real. The opponents of the theory of sentient artificial beings claim that the very notion of sentience in humans derives from our physical experiences, which set the terms of how we perceive information and emotions. Any emotion, they say, has some physical effect and manifestation, and since robots by definition cannot feel a thing, their sentience will never be similar to that of humans.

This argument, of course, isn’t incontrovertible, but it shows the conceptual depth of the problem of “electronic personhood,” a depth that can’t actually be dealt with in a single EU directive. Still, most European politicians and experts agree that some regulations have to be in place, which is not the case for their colleagues across the pond. In the U.S., the situation with AI regulation couldn’t be more different.

Meanwhile in America

America has always been a proponent of the free market and has often stigmatized European governments as too socialist. Given the country’s rather complex relationship with socialism in the past, that’s understandable; however, it still plays a noticeable role in the development and regulation of new technologies such as AI.

The U.S. government today doesn’t have a plan for regulating artificial intelligence. More importantly, it effectively says that there’s no need for such a plan, as the free market is what made America great in the first place, and, as we all know, the current American government’s official agenda is to make it great again.

In particular, Trump’s technology advisor Michael Kratsios said at a meeting with the country’s tech giants:

“Our free market approach to scientific discovery harnesses the combined strength of government, industry, and academia and uniquely positions us to leverage artificial intelligence for the betterment of our great nation. We’ve already made America the best in the world for AI research and development. Our task now is to make sure America stays the best. In the private sector, we will not dictate what is researched and developed. Instead, we will offer resources and the freedom to explore.”

However, the tech sector seems to be far more socialist than its own country’s flamboyantly free-market-loving government. Statements from tech company spokespeople suggest that the industry actually needs someone, namely the government, to create some rules of the game.

“China, India, Japan, France, and the European Union are crafting bold plans for artificial intelligence. They see AI as a means to economic growth and social progress. Meanwhile, the U.S. disbanded its AI taskforce in 2016. Without an AI strategy of its own, the world’s technology leader risks falling behind,” Intel CEO Brian Krzanich wrote in his blog post.

The President and CEO of the Information Technology Industry Council Dean Garfield noted:

“In order to maintain America’s leadership on AI, the administration should continue to invest in research and development and advance programs that equip the workforce with skills of the future. We look forward to sharing how we can help advance these priorities and others at the event.”

Still, when it comes to establishing electronic personhood in the U.S., there may be no need to actually create a new institution, or to bother the lawmakers at all. During a recent debate at the University of St. Gallen, Florida State University professor Shawn Bayern described a way an artificial intelligence entity can be held legally liable under existing U.S. law without any electronic personhood or anything similar. He proposes a model under which such an entity may freely operate, and be held liable for its actions, under a contract concluded between two or more humans who agree that their company, i.e. a legal person, will be operated by an artificial intelligence entity.

“You can set up a legal entity that has an operating agreement that gives effect to the observable state of any software system like an artificial intelligence or autonomous system, and by doing that the autonomous system gets a very close analog to legal personhood. If you ask a hundred lawyers whether or not a robot can buy a house, they’d all say no. But what I say is that it’s actually possible to do that by means of the artificial structure of a legal corporate entity or LLC entity: all the operating agreement has to do is give effect to these autonomous systems, and now the autonomous systems can buy property, can enter contracts, can be legal agents, can be legal principals, all of the basic incidents of legal personhood,” he said.

All in all, America apparently doesn’t see any need for new regulations in the area of AI. Even though the industry seems to want some, those mostly concern the general development of the technology and don’t involve “electronic personhood” directly. After all, if it ever comes to that, Mr. Bayern’s proposal could theoretically do just fine under the existing framework.

Still, there are other sides to this story. While creating a legal framework for AI operation could be relatively easy, there’s nothing easy about the actual operation of an AI entity. It cannot be fully addressed by any law or regulatory framework, as it involves profound issues of moral philosophy and the ethics of a robot.

On the Threshold of Tomorrow

At the very same debate mentioned above, Dr. Andrew Walton, a lecturer in Political Philosophy at Newcastle University, noted that when it comes to the ethics of an AI, or the ethics of interacting with one, there are three significant problems with the idea of legal personhood for an autonomous entity. The first, the morality gap, implies that humans have moral concerns about interactions with other people, which an artificial intelligence won’t necessarily share. Dr. Walton noted that “an artificial intelligence, if it can develop its own ideas, could easily stray into having a moral outlook that’s not very palatable.” The second, the punishment gap, refers to the fact that an AI won’t fear punishment the way humans do, so the possibility of being punished won’t deter it from wrongdoing. Finally, the accountability gap reflects the fact that there will be no natural person to blame for what an AI might do.

“I think it’s important to ask ourselves which kinds of interests people would have in setting up these [legal] entities. What I’ve suggested is with these three gaps the interest people will have in setting up these entities are not altogether good. For this reason I think, if nothing else, we need to be cautious about extending legal personhood to them at this point,” said Dr. Walton.

Yet, even considering the aforementioned gaps, some kind of legal status granted to advanced artificially intelligent beings might be the key to a simple regulatory solution to the potential problems entailed by the widening adoption of quasi-intelligent systems and their progress towards fully intelligent entities. It wouldn’t require significant changes to the existing legislation governing corporations, and it might allow such AI beings to perform more complicated and important tasks. But if an AI goes rogue, it will be infinitely hard to find out whether the reason is the intent or negligence of its creators, or its own malicious ideas. Alternatively, a smart enough AI could take over the world, or at least a spaceship, while remaining a mere tool that really doesn’t want you to pull its plug.

Apart from the practical aspects above, there are other sides to the reasoning behind the personhood debate. It is hard to think about artificial intelligence without comparing it to human intelligence, no matter how different they really are. Many would perceive a sufficiently advanced AI as an equal. Giving such a sentient being legal personhood would seem only right, but the fundamental differences between humans and sentient machines add a big “if.”

Subconsciously, the very idea of a sentient being that is obviously superior to ourselves evokes profound discomfort, which can be traced through innumerable works of science fiction like The Matrix or Tron that, at least according to some experts, seriously influence the legislative process in the real world. There are, however, other views on the problem. Brian Newar, COO of Crypto of Korea, believes that restrictions imposed on AI should be programmatic, not legal.

“As it is, deploying anything other than closed AI that is under strict protocol to stay within particular virtual boundaries is all that is viable. Lawmakers would be doing a disservice to their constituents if they allow AI to take control of the fundamental institutions that make human civilization work,” he told lawless.tech.

Another important notion that usually comes to mind when AI is discussed is the common fear that sufficiently advanced robots will take people’s jobs, and that this is why regulation is needed. However, it’s not the first time such a debate has taken place: back in the 19th century, at the peak of the Industrial Revolution, the so-called Luddites were afraid of the very same thing, even though the steampunk machinery of the time wasn’t even remotely as advanced as today’s AI (which, as most experts agree, isn’t really advanced at all).

“They claimed that people’s jobs would be eliminated and those unemployed people would be rendered useless. What happened then is what will happen if AI takes over: people will find a way to make themselves useful. It might take some time, but it is inevitable if they wish to survive. People are great at surviving. Modern luddites have as flimsy an argument now as their forebears did then. Technology will move forward with or without them for the betterment of all. Market saturation and unemployment will definitely happen, but capitalist economies will sort that out in time. The unemployed will probably become the young who could, as is the way of things, become the grist for war,” Mr. Newar said.

Agreeing with him is Archil Cheishvili, CEO of GenesisAI.

“AI will reduce employment in numerous sectors through automation, but total employment numbers will remain largely unchanged. The main reason is that AI will create new industries in which people will be employed. We have a parallel from history: the Industrial Revolution made horse carriage drivers unemployed, but the new technology created new industries (for example, car manufacturing),” he told lawless.tech.

Still, there are other profound issues that have to be addressed, and they go much deeper than economic or legal concerns.

Human beings are driven by an infinitely intricate combination of objective and subjective needs, stemming from their biology and shaped by their previous experience. The human mind evolved biologically as a mechanism that helped us fulfill our needs more efficiently in a given environment. It evolved socially to help humans act in coordination and fulfill their individual needs even more efficiently. Humans developed the morals, fears, and ambitions that are fundamental to all our rights, laws, principles, and punishments. Simply put, we think we can tell good from evil and thus tend to do good for the sake of our society, or at least out of fear of punishment. Now try to program millennia of trial and error into your smart toaster.

AI developers also utilize something similar to biological evolution. So-called evolutionary algorithms allow people to develop complicated neural networks, or AIs, that excel at certain tasks, such as processing human language or images. Thousands of slightly different algorithms are tested against one dataset, and the best specimens are picked, multiplied with minuscule alterations, and tested again. In the end you have an algorithm capable of picking your favourite actor out of a crowd at a stadium, and only a very abstract understanding of how it works. After all, many AI systems mimic the basic principles of the human nervous system, such as the low-level principles of image processing. But these similarities don’t cover the differences.
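To make that select-mutate-repeat cycle concrete, here is a minimal sketch in Python, assuming a toy fitness function that stands in for “testing against one dataset.” A real system would score a neural network on a benchmark instead; every name and parameter below is illustrative, not any particular framework’s API.

```python
import random

POP_SIZE = 1000        # thousands of slightly different candidates
N_PARAMS = 8           # parameters of each toy "algorithm"
MUTATION_SCALE = 0.01  # minuscule alterations between generations
N_SURVIVORS = 100      # best specimens kept each round

def fitness(params):
    # Toy stand-in for scoring against a dataset: reward parameters
    # that approach a fixed target vector.
    target = [0.5] * N_PARAMS
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def mutate(params):
    # Copy a survivor with tiny random alterations.
    return [p + random.gauss(0, MUTATION_SCALE) for p in params]

# Start from random candidates.
population = [[random.uniform(-1, 1) for _ in range(N_PARAMS)]
              for _ in range(POP_SIZE)]

for generation in range(50):
    # Test every candidate, keep the best specimens...
    population.sort(key=fitness, reverse=True)
    survivors = population[:N_SURVIVORS]
    # ...then multiply them with minuscule alterations and test again.
    population = [mutate(random.choice(survivors)) for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print("best fitness:", fitness(best))
```

The loop never inspects how a candidate works internally, only how well it scores, which is exactly why the resulting system can perform impressively while its creators retain only an abstract understanding of it.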

In any case, the question of granting AI agents legal personhood has yet to be answered even in a single jurisdiction, let alone worldwide. Meanwhile, the need for a clear regulatory framework on a worldwide scale, with rules to govern the creation and initial programming of “sentient machines,” is very real. The Three Laws outlined by Isaac Asimov back in the day don’t seem to really work in the exuberantly intricate world of the 21st century.


This post originally appeared at https://lawless.tech/i-robot-llc-making-artificial-intelligence-a-person/

lawless.tech is an online magazine devoted to covering the ongoing regulatory attempts to oversee and control the newest technologies.

Join our Telegram channel, follow us on Twitter and Facebook to explore how regulations will impact the latest technological advances.
