Tech Giants Promise to Be Responsible with AI

in #ai · 7 years ago

Should we believe they will keep their word? According to a recent article, tech giants have pledged to use AI responsibly. The article on Axios (a site I'm not very familiar with) is interesting:

Why it matters: The tech industry is trying to get ahead of growing anxieties about the societal impact of AI technologies, and this is an acknowledgement on companies' part that their data-hungry products are causing sweeping changes in the way we work and live. The companies hope that pledging to handle this power responsibly will win points with critics in Washington, and that showing they can police themselves will help stave off government regulation on this front.

Suppose, for the sake of argument, that we think of AI responsibility the way we think of data responsibility. Can we say that the tech giants have been responsible in managing our data? Can we say tech giants have been responsible with regard to privacy? There are plenty of reasons why many people feel uncomfortable trusting certain tech companies. In fact, if we look at the news, we are constantly hearing about large breaches, leaks, and other signs of systemic insecurity.

At the same time, tech companies are lobbying, and they have their own responsibility to protect their ability to profit for their shareholder owners. It is this responsibility to protect shareholder profitability that leads me to believe the profit motive could get in the way of protecting human rights, human dignity, privacy, or being responsible with AI.

Innovation, in my opinion, is critical, and bad regulation doesn't help anyone. At the same time, over-regulation is also not the answer. But do we have evidence of effective self-regulation in the tech industry right now on anything? Then of course there is the issue of all that data, and all that AI, being concentrated in one industry, or even just a few companies.

References

  1. https://www.axios.com/tech-companies-pledge-to-use-artificial-intelligence-responsibly-2500397351.html

In my view, we ourselves should do more to protect our privacy rather than relying on AI. Your post raises the right issue. AI only gives us some protocols that we can use. I appreciate that you chose this topic.

The biggest threat for me is an open AI that can learn, because learning doesn't necessarily mean it will do the "right" thing. For example, it could be programmed to stop war and then arrive at the conclusion that by killing all humans there would be no war.

There has been growing concern about AI over the years: are we building Skynet and its Terminators? Over the last few years AI has been advancing rapidly, which has people like billionaire Elon Musk worried, and the crusade against it has been growing louder. Most are concerned it will take their jobs; others are concerned with how it will be applied. Will we be replaced by AI? Will it be our doom? Lots of questions have gone unanswered by tech companies, which has led to assumptions. And if AI goes rogue, do tech companies have a kill switch? Recently Facebook had to shut down an AI after it started writing in its own language that humans could not understand. Let's wait and see what happens next.

Hello, no, we can't trust them! Tech giants currently do not act responsibly with our personal data; they track us and spy on us. AI is only going to enable that further.

Anything that is banned goes "underground": Prohibition and the war on drugs, for example. How has that been working out?

I think that no matter how responsible we are with AI, there is always a risk factor. Someone will have a motive and will see their use as responsible; it comes down to perception.

Should we trust them? Ask yourself how well these tech giants have handled the data millions of people entrusted to them, and therein lies the answer.
