Computers powered by AI can decipher your brain activity, yet there is limited discussion of AI safety or ethics


As many people already know (and some may not), there is a technology called fMRI, which stands for functional magnetic resonance imaging. New AI techniques allow computers to begin deciphering the meaning behind the brain activity captured by imaging technology like fMRI. Given that AI will likely continue to get better, cheaper, and more ubiquitous, and that this technological development is unlikely to stop, now would be a good time to open up a discussion on the ethical implications of a world where smart devices (the Internet of Things) can read our brains, or where the government decides, for national security reasons, that brains need to be scanned much as bodies came to be scanned at airports after 9/11.
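To make the decoding idea concrete, below is a minimal sketch of the standard "multi-voxel pattern analysis" approach: train a classifier to map fMRI voxel activation patterns to stimulus categories. Everything here (the synthetic data, the voxel counts, the two categories) is an illustrative assumption, not a real pipeline.

```python
# Minimal sketch of fMRI decoding (multi-voxel pattern analysis).
# All data is synthetic; a real study would use preprocessed scans.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_voxels = 200, 500           # hypothetical trial/voxel counts
labels = rng.integers(0, 2, n_trials)   # 0 = "face" trial, 1 = "place" trial

# Simulate voxel activations: a small subset of voxels responds
# differently depending on what the subject is looking at.
voxels = rng.normal(size=(n_trials, n_voxels))
voxels[:, :20] += labels[:, None] * 0.8  # category-dependent signal

# A linear classifier learns which activation pattern goes with which
# category -- this is the core of "reading" a brain state from fMRI.
decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, voxels, labels, cv=5).mean()
print(f"Cross-validated decoding accuracy: {accuracy:.2f}")
```

Real decoding pipelines involve heavy preprocessing and far more data, but the principle is the same: as the classifiers improve, more of the brain's state becomes legible to machines.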


An interesting paper discusses the ethics and potential implications of using brain scanning and deciphering technology for national security. The paper, titled "Neuroscience, Ethics, and National Security: The State of the Art" (Tennison & Moreno, 2012), offers the abstract below:

National security organizations in the United States, including the armed services and the intelligence community, have developed a close relationship with the scientific establishment. The latest technology often fuels warfighting and counter-intelligence capacities, providing the tactical advantages thought necessary to maintain geopolitical dominance and national security. Neuroscience has emerged as a prominent focus within this milieu, annually receiving hundreds of millions of Department of Defense dollars. Its role in national security operations raises ethical issues that need to be addressed to ensure the pragmatic synthesis of ethical accountability and national security.

National security organizations are not known for taking ethics seriously, so the fact that there is any discussion of ethics at all is an improvement.

Elon Musk wants to ban killer robots

Now if we connect the dots, we can take into account that the robots of the future will more than likely be able to read our minds. The AI of the future will likely be able to take into account what people are thinking, as AI is simply the "smart part" of the robot. The fact is, AI and the ability to decipher the brain are the two most disruptive technological developments in all of human history. The unfortunate problem we face is that our social institutions haven't even considered the implications of either of these disruptions, and even most AI researchers haven't fully considered the risks that come with weaponization.

It is my opinion that "AI experts" are typically biased in favor of their own toys: if you're developing something, you are not necessarily the best person to conduct its risk assessment. AI experts may have a deeper understanding of the current state of AI technology, just as cryptocurrency or cybersecurity experts have a deep understanding of the state of the art in their fields, but that is separate from conducting a risk assessment. A risk assessment should take expert input in the form of opinions on what is possible and on timelines, but that doesn't mean that, for example, Kurzweil or Ng have done an actual risk assessment of AI; neither of these AI experts is a cybersecurity specialist. In fact, most AI experts who underestimate the risk of AI have no background in security at all. It must be noted that security is a conservative, risk-based field whose whole purpose is to minimize or manage risk, while AI researchers are focused not on security or risk but on advancing the science to the state of the art.
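To show what that risk-based mindset looks like in practice, here is a toy risk register in the classic security style, scoring each threat as likelihood × impact. The threats and the scores are hypothetical placeholders for illustration, not an actual assessment.

```python
# Toy security-style risk register: risk = likelihood x impact.
# The threats and the 1-5 scores are hypothetical placeholders.
threats = [
    # (threat, likelihood 1-5, impact 1-5)
    ("Weaponized autonomous AI", 3, 5),
    ("Covert brain-data collection via IoT devices", 4, 4),
    ("Mandatory brain scanning at security checkpoints", 2, 4),
]

# Rank threats by risk score, highest first -- the point is that
# expert opinions on capability feed in as likelihood estimates,
# but the assessment itself is a separate, conservative exercise.
for name, likelihood, impact in sorted(
        threats, key=lambda t: t[1] * t[2], reverse=True):
    print(f"{name}: risk = {likelihood * impact}")
```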

So I would like to see more input from qualified cybersecurity experts in the AI safety debate. Only by having security experts weigh in can we get a real sense of what the risks and dangers are. At the same time, we have to consider how various technologies are converging, as is the case with IoT, BCI (brain-computer interface), and AI. The Internet of Things could result in a world of ubiquitous smart devices as AI continues to improve, while BCI, depending on how the technology evolves, could result in devices that can actually scan our brains.

What would a ubiquitous, autonomous world of brain-scanning devices look like?

Honestly, I have no idea, but a lot of things would be disrupted. For a start, the entire justice system rests on the fact that a jury cannot read the brain and intentions of a suspect. In a world where juries can look inside the thoughts of every criminal, what would crime look like, and would we still need prisons? We would be able to determine who is lying, who genuinely feels guilty, and who genuinely made a mistake versus who acted on purpose, and we would know everyone's motivations. When all motivations are known, prison becomes very archaic, yet there has been no debate at all about what to do with the justice system in a world with no privacy and no secrets.

How should justice work in a world where our brains are open books?


This is a question rather than an answer, because no single individual can answer it alone. In a world where all of our brains are open books, where all of our motivations are known to AI, whether that AI belongs to a company like Google or to the government, what purpose does a justice system serve? Should prisons be abolished? And if we are going to have that world, is Internet access indeed a human right? Many questions need to be addressed. What do you think justice should look like, assuming we get the world some people hope for: a fully transparent, open society where no one can lie and all motivations are known, either to each other or to the AI?

References


Pallarés-Dominguez, D., & González Esteban, E. (2016). The ethical implications of considering neurolaw as a new power. Ethics & Behavior, 26(3), 252-266.

Tennison, M. N., & Moreno, J. D. (2012). Neuroscience, ethics, and national security: The state of the art. PLoS Biology, 10(3), e1001289.

Willmott, C. (2016). Use of genetic and neuroscientific evidence in criminal cases: A brief history of "neurolaw". In Biological Determinism, Free Will and Moral Responsibility (pp. 41-63). Springer International Publishing.


Science is incredible. We are going to experience some fabulous things.

@dana-edwards, I am amazed at how you share this wonderful topic and then analyze it and write your own thoughts.

This will be featured in my latest issue.

For a very long time I've been interested in working on the development of a true AI. I even offered my services to a couple of projects with the freely offered advice that AI, from a human perspective, is not just about an understanding of computers and the way they work - it is also about human nature as it is and as it could be - and thus a multi-disciplinary team extending beyond the realm of ICT would be best suited to the task.

Never heard back.... ^_^;;;

I will refrain from upvoting specifically so as to give myself a reason to come back to this post tomorrow and read it through properly (and then upvote, as it already looks intriguing). That's how I roll (after I snooze - which is what I am about to do... ^_^).

Thanks @dana-edwards!

It's always fun to read your posts. Keep going!

And it will kill spammers on Steemit, genuinely hahaha. Btw, I will ask that computer to read my friends' minds too, because they do friendship just for their own profit or some work, and I will order that computer to kill those mofos.

They still won't be able to "look" inside the mind of a suspect in court for quite some time, because fMRI requires one to actively focus on something for it to be decipherable. Anyone with enough willpower and concentration can fake their active thoughts.

This isn't exactly the case. fMRI has been used in lie detection; whether it's good enough for court is another matter. Also, it's not possible to fake a thought - what does that even mean?

I think you need more knowledge of how fMRI lie detection works. When you lie, you use more brain areas than when you tell the truth.

Then don't lie to your own brain.

Instead of thinking about your crime, think about potatoes or something boring. =p

If someone asks you about the crime, how do you avoid thinking about it if you did it?

Just potatoes. Just think of warm potatoes, smothered in cheese and butter, with some delicious, savory spices, and nothing else.

You could ask actors/impersonators or those who are just good at lying... they all use the same technique - visualizing something untrue so clearly that the brain believes it's the truth in that moment. That's why the best actors actually feel everything they act.

Maybe those guys wearing the tin foil hats were onto something. :)

Very interesting article! Upvoted and followed for more content

Elon wants to ban killer robots, and that qualifies as genius? I believe we should also ban killer laser light rays and Godzilla replicas. Did I grow a few IQ points?

p.s. that is a loaded question. You don't know how low I was starting from, so it might be easy for me to grow a couple of IQ points off my starting base, LOL

Not to be paranoid, but if we're hearing about it then it's already likely being implemented in some ways.

It's bad enough that everything we do can be creeped on; it'd be nice to at least have my brain be private, unless they created tools that actually benefit me and let me program my brain and quantify any modifications.

Being able to simulate populations with their actual brains makes the software so scary to me, because there's a real possibility to control... basically the entire population, and there's nobody who can be trusted with that much power.

Data-wise, there's already so much intellectual property we aren't compensated for; it'd suck to just be a dream machine for someone else's profit.

It's all 1984, The Matrix & Eternal Sunshine of the Spotless Mind up in here.
But maybe I'm already just software uploading myself to a database all the time.

It's funny, the things that used to be terrifying that we just accept. Life.
