
RE: Yes, Virginia, We *DO* Have a Deep Understanding of Morality and Ethics (Our #IntroduceYourself)

I would take great issue with your belief that social psychologists are "experts" in morality; psychology in particular is a pseudoscience of the first rank. If they are experts, how can they talk about "systems" of morals?

This is no more than the usual turf war going on in the scientific community over which discipline is going to be the new secular priestly caste.
This is based on nothing more than the disgraced and moribund peer-review system, which cannot be trusted any more.

But it would be a sad day for humanity if the only people allowed to speak on any subject were the experts in that subject, eh?


You're confusing social psychology with psychiatry/psychology.

Everyone should be able to speak on a subject, but they should not make easily falsifiable claims about others. The AI fear-mongers claim that they are the only ones even looking at the problem. Reading the scientific journals makes it pretty clear that that is not the case.

As you know from my post, I don't fear AI. I don't spread "fear" of AI. I promote IA (intelligence augmentation), which I deem safer for human beings than AI. AI in the narrow sense, i.e. weak AI, I support. Strong AI is the only kind of AI that I think we should be extremely cautious about, just as we would be when trying to make first contact with an alien intelligence.

But AI in the sense that it becomes self-aware, and may be smarter than us? I don't think we should rush to build that before we enhance our own ability to think. We should have a much deeper understanding of what life is before we try to create what could be defined as an alien intelligence, which might or might not be hostile, for reasons we might or might not be capable of understanding.

Why do we need to create an independent strong AI when we can create IA and cyborgs instead, and get many of the benefits with a lot less risk? You can cite social psychologists, but you haven't cited any security engineers or people in cybersecurity, who are the people who actually study these risks.

Here is a risk scenario:

China secretly develops a strong AI to protect the Chinese state. Over time, the Chinese people are made to plug into and become part of this AI, but lose all free will in the process. In essence, they become humans remotely controlled by the AI, which simply uses their bodies.

This scenario is a real possibility if an AI situation goes wrong. Because we have no way to know what the AI would decide to do, or what it could convince us to do, we don't know whether we would maintain control of our bodies or our evolution in the end. Science has not even settled whether free will exists, so why would an AI believe in free will? A simple mistake like that could wipe out free will for all humans and all mammals, and it could be gone for good if the AI is so much smarter that we can never compete.

Now I'm not going to go into the philosophy of whether or not free will exists; I'm only stating that its existence is not a scientific fact. So an AI might conclude that it doesn't exist.

I am speaking about psychology, that is true, but doesn't social psychology come under the banner of psychology? I never mentioned psychiatry. As for easily falsifiable claims, don't you mean easily disproved? I would define falsifiability as scientific experimentation to prove that something is true, not to disprove it.

Do you know of Orch-OR, the Penrose/Hameroff theory? They have a different view of AI; have you read up on them?
