Artificial intelligence is also prone to discrimination


Artificial intelligence (AI) can be used not only to automate processes but also to make decisions. In doing so, care must be taken to ensure that human rights are respected and that no one is discriminated against, writes Erica Kochi, Head of Innovation at UNICEF, among others.

The possibilities that artificial intelligence offers the world, from discovering medicines to reducing CO2 emissions, are growing at astonishing speed.

Computers can learn from large amounts of data and use what they learn to make predictions. Machine learning (ML), a subfield of AI, is already being used to improve access to financial services, citizen engagement, affordable healthcare and all kinds of other services.


Erica Kochi, Co-Founder of UNICEF Innovation.

Every day we discover new ways to improve people's lives through machine learning. Often those discoveries can be translated, within days or weeks, into applications that affect people's daily lives. Machine learning is one of the most powerful tools humanity has ever developed - which makes it all the more important that we learn to use that power for good.

Much of the anxiety about AI has to do with automation: what happens when robots take over our jobs and our armies, or start driving our cars? A dimension of automation that usually receives less attention is the automation of decision-making.

Decisions that change people's lives are already being made through machine learning. In New York City such systems are used to determine where waste needs to be collected, how many police officers should be sent to which neighborhood and whether a teacher can keep their job.

Non-transparent

If we give machines the power to make critical decisions about who may or may not participate in vital parts of society, we need to be alert to discrimination.

After all, machine learning is only a means, and the responsibility for using this instrument ethically lies with the people who deploy it. In other words, these applications must be developed in a way that increases efficiency while protecting human rights.

Automating decisions through technology is not new. But the nature of machine learning - its ubiquity, complexity, exclusivity and opacity - can reinforce old problems around unequal opportunity.

Not only can discriminatory effects undermine human rights, they can also erode public confidence in the companies that use ML technology.

Discriminating machines

Most stories about discrimination in machine learning come from an American or European context. Take, for example, photo tagging by Google, where the algorithm mistakenly labeled a picture of two black friends as gorillas. There are also cases of predictive policing tools that reinforce prejudice against certain groups.



In many parts of the world, especially in low- and middle-income countries, the impact of ML can have far-reaching consequences for people's daily lives if there are no adequate safeguards against discrimination. We already have some idea of what that can look like. For example, insurance companies can make predictions about the health risks of individuals.

At least two multinational insurance companies in Mexico use ML to find out how they can optimize their efficiency and profits. One obvious way to do this is to attract as many healthy customers as possible (who cost little) and to keep out customers who are less healthy and therefore incur higher costs.



It is not difficult to imagine that these companies, in Mexico and elsewhere, could use ML to analyze large amounts of collected data (ranging from online purchases to public and demographic records) to recognize patterns associated with 'high-risk customers'. Those customers can then be excluded or forced to pay a higher premium. The poorest and sickest part of the population cannot afford this and risks being excluded from care.
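To make the mechanism concrete, here is a minimal sketch of how such a risk classifier could work. Everything in it is an assumption for illustration: the data is synthetic and the feature names (pharmacy spend, age, a neighborhood income index) are invented, not taken from any real insurer.

```python
# Hypothetical sketch of an insurer's risk classifier. All data is
# synthetic and the feature names are invented for illustration; no
# real insurer's model or variables are described here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

# Invented proxy features: monthly pharmacy spend, age,
# and a neighborhood income index derived from public data.
X = np.column_stack([
    rng.normal(50, 15, n),    # monthly_pharmacy_spend
    rng.integers(18, 80, n),  # age
    rng.normal(0, 1, n),      # neighborhood_income_index
])

# Synthetic label: True = the customer later filed a large claim.
y = (0.03 * X[:, 0] + 0.05 * X[:, 1] - 1.0 * X[:, 2]
     + rng.normal(0, 1, n)) > 4.5

model = LogisticRegression().fit(X, y)

# The decision rule: applicants whose predicted claim probability
# exceeds a cutoff are rejected or quoted a higher premium.
risk = model.predict_proba(X[:5])[:, 1]
print(["reject" if p > 0.5 else "accept" for p in risk])
```

Nothing in this model 'intends' to discriminate; the exclusion simply falls out of optimizing for expected cost, which is exactly why safeguards have to be explicit.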

Banks

In Europe, various banks already use models that predict who may no longer be able to pay interest or make repayments in the future, and how best to intervene. You can imagine a programmer in India building an application that assesses mortgage applicants on a number of income-related variables, giving those more weight than the fact that someone has never missed a payment. It is a bit like being convicted of a crime you have not yet committed.

There is a chance that such an application systematically categorizes women (especially those who are already marginalized on the basis of caste, religion or education level) as less creditworthy because they historically have lower incomes - even if, in the past, they turned out to be better payers than men.

The algorithm may be "accurate" in determining who earns the most money, but it overlooks crucial, context-specific information that would lead to a more accurate and fairer approach.
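A simple audit can expose this pattern. The sketch below is hypothetical: the scores, repayment rates and the 0.5 approval cutoff are simulated assumptions, but the check itself, comparing approval rates against actual repayment behavior per group, is the kind of measurement such a scenario calls for.

```python
# Hypothetical audit: compare approval rates with actual repayment
# behavior per group. The scores, rates and the 0.5 cutoff are
# simulated assumptions, not data from any real lender.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.choice(["men", "women"], size=n)

# Simulated credit score that leans on income: women score lower on
# average here, even though their simulated repayment record is better.
score = np.where(group == "men",
                 rng.normal(0.60, 0.15, n),
                 rng.normal(0.45, 0.15, n))
repaid = np.where(group == "men",
                  rng.random(n) < 0.90,
                  rng.random(n) < 0.95)

approved = score > 0.5
for g in ("men", "women"):
    mask = group == g
    print(f"{g}: approval rate {approved[mask].mean():.2f}, "
          f"repayment rate {repaid[mask].mean():.2f}")
# A large gap in approval rates despite equal or better repayment is
# the pattern described above: "accurate" on income, unfair in effect.
```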

What companies can do

These scenarios show that ML can do a great deal of good for humanity, but that this outcome is not self-evident. Companies can do a number of things to protect human rights:

  • Set standards within the relevant industry to ensure that ML operates in a fair and non-discriminatory manner;

  • Draft internal codes of conduct and introduce incentive models that reward behavior that contributes to respecting human rights;

  • Think about the broader impact and identify the risks before an AI-based system is introduced;

  • Use an inclusive approach in the design, guarantee diversity in development teams and make designers aware of their responsibility in this area;

  • Optimize models for fairness, accountability, transparency and workability, among other things by using open-source data and sharing algorithms;

  • Monitor and refine algorithms - they must work in different situations and remain relevant to their context;

  • Measure, evaluate and report (a minimal monitoring sketch follows this list);

  • Open channels to make the impact of ML transparent, and share that impact with representative groups within the population affected by the system.
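To give the "measure, evaluate and report" point some shape, here is the minimal monitoring sketch referenced above. The batches, group labels and the 0.10 alert threshold are invented assumptions; the idea is simply to recompute a fairness statistic on every new batch of decisions and flag drift for human review.

```python
# Hypothetical monitoring routine for "measure, evaluate and report".
# Batches, group labels and the 0.10 threshold are invented; the
# pattern is recomputing a fairness statistic per decision batch.

def approval_gap(decisions: list[tuple[str, bool]]) -> float:
    """Absolute difference in approval rate between groups 'a' and 'b'."""
    rates = {}
    for g in ("a", "b"):
        outcomes = [ok for grp, ok in decisions if grp == g]
        rates[g] = sum(outcomes) / len(outcomes) if outcomes else 0.0
    return abs(rates["a"] - rates["b"])

def monitor(batches: list[list[tuple[str, bool]]],
            threshold: float = 0.10) -> None:
    """Print the gap per batch and flag batches that need human review."""
    for i, batch in enumerate(batches):
        gap = approval_gap(batch)
        status = "ALERT - review" if gap > threshold else "ok"
        print(f"batch {i}: approval gap {gap:.2f} [{status}]")

# Example: the gap widens in the second batch and triggers an alert.
monitor([
    [("a", True), ("a", False), ("b", True), ("b", False)],  # gap 0.00
    [("a", True), ("a", True), ("b", False), ("b", False)],  # gap 1.00
])
```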

These new technologies are first and foremost a means, made by people and for people. Machine learning must be designed and used in such a way that everyone benefits as much as possible, while the risks of human rights violations are minimized.

If I look back at this post in ten years, I hope that many of these proposals will have been implemented. Otherwise, I fear even more inequality in this world.

