Moral wars: in the future, wars will last seconds

in #cybersecurity · 7 years ago (edited)
  • Technology is accelerating as knowledge is increasing.
  • The world is opening and people are being brought closer together in superficial ways.
  • Morally the world is very divided as many people feel morally superior to others and wish to enforce their morality on all.
  • Warfare is going to evolve due to network warfare and AI (autonomous weapons).

In a network battle of the future, the key players of highest importance will be neutralized nearly instantaneously by autonomous mechanisms we cannot fathom. Because we cannot fathom it as civilians, we have no defense against it. The very openness and connectedness can be used against any individual in a future conflict, as all information can be manipulated, and disinformation will become more widespread or perhaps even ubiquitous.

In the future we will all be deceived in wars which last seconds, for purposes we won't understand, using autonomous mechanisms we cannot fathom. I would like to be wrong about this and have more hope for the future, but the current trajectory looks like it will lead to great conflict, mainly for the reasons listed below:

  • 1 Humans like to judge others and feel morally superior. This instinct will likely drive many wars, where one group of humans will use force to require other groups of humans to adopt what they perceive as superior morals, culture, etc.
  • 2 Humans like technology and automation and this will not stop. Weapons will become smaller, smarter, more precise at targeting, and more automated. Big data and AI will make for very quick efficient wars where a small number of individuals are targeted based on criteria only the machines understand.
  • 3 Blockchain and decentralization: even if there are some good use cases for the blockchain and security benefits to decentralization, risks and insecurities will of course appear as well. These insecurities will be exploited in cyberwarfare scenarios.

It has been my hope that by decentralizing AI we can give humanity a chance to avoid some of these very negative outcomes. If everyone can access AI, then we can actually benefit from the openness and the connectedness without a central entity to filter or manipulate our thinking. The point is that we need to be a lot smarter, we need to be a lot better at decision making, our morals have to evolve a lot, and we need to judge less and understand a lot more.

The current culture in the United States is a punishment-based society, where openness is encouraged while punishments for mistakes become increasingly harsh. The justice system has not evolved at the pace of Facebook, or Google, or our knowledge of neuroscience. As a result, a completely open society, while it sounds nice, is likely to produce a lot of criminals, a lot of bad laws which get enforced, and a lot of social justice battles based on limiting concepts like nationality, race, gender, and sexuality, only with much higher technology and much easier access to information.

Only if we, the people interpreting the information, evolve our thinking can we make better decisions about how to use our technology. Unfortunately this isn't happening, even in the crypto space, where technology is built first and how best to use it to make a better world is thought about after the fact. Our brains, our emotions, and our instincts are our limiting factors.

Our human limiting factors have to be transcended if we are going to navigate the complex society we are creating, and we can use AI to transcend our own ignorance if we develop AI with this in mind. This means that as developers we will have to assume we are far more ignorant than future users of our platform will be, and design the platform so that future users can learn how to improve both themselves and the platform in synergy. This virtuous circle of self-improvement can then allow the users to improve the state of the world by improving themselves, becoming more effective people (or cyborgs, if you want).


AI is most dangerous when only a few organizations have it and there is little oversight, involvement, or understanding. With great power comes...

And that is the current trajectory. Only a few organizations seem likely to hold the majority of the AI infrastructure, and there is so little understanding of it that good people cannot even figure out how to use it for the greater good, and oversight isn't enlightened. We require much more involvement — really, it should be accessible to everyone — and we need enlightened oversight, which only comes after the social and cultural changes that follow from everyone having access to AI.

Basically the regulators are reactionary and always lag behind everyone else. So we have to bring up everyone else first before regulators can think about it. Policy makers respond to polls, and polls only respond to what is considered normal at any point in time. Norms tend to change with technology, but this only happens when technology is ubiquitous and has a true and deep cultural impact.

Radio for example, or the Internet, or electricity, or written language. Blockchain is nice, but blockchain without AI is in my opinion empty. Blockchain without AI will simply decentralize the ignorance in my opinion because the AI is required for the enlightenment to take place.

I would have up voted your post but my voting power is way too low and recovering. I will resteem your post instead :)

The same is happening to me, I too have resteemed.

Better ourselves first to see a better outcome in the world !

Scary, but I unfortunately see similar outcomes.. Upvoted and followed!

Why worry?
Why do we feel bad or even scared?
Why are our minds mixed with negative things?
Why are we bothered by anything?
Why do we so easily think the opposite of something?
Why do we so easily complain?
Why do we fear?

Have faith, do the job for the common good, sacrifice in the service of all.
Then let all things happen as they are.
All the best has been done, and nothing more.
If I feel I'm not satisfied, then I go for the maximum, and beyond if I like.

Scary to think about. What will the future bring? I think all possibilities are there.

I do agree with you that we have to include morals in all we do. But that's just it: technology will always be developed. When there is a market (and markets can be created), it will become available. Even when, for morality's sake, something should not be developed, we will still do it. Every technology can be used for good, but at the same time also for bad. AI can be good, but it can also be bad. The Internet is good for a lot of us, but because of the Internet we also have a lot of bad, since the Internet allows it. Technology is not to be stopped. Whether something is moral or not, it will happen.

People already have morals; the problem is ignorance. A person who is ignorant but has morals is not going to produce good outcomes. Ignorance is due to lack of knowledge, and you can have a lot of morals with very little knowledge.

Education will not help because the brain itself is a limitation. To transcend that limitation is to have true morality.

I'm 'afraid' AI will become more intelligent than humans very soon; a couple of decades, maybe 50 years from now. When that happens, humans had better be powered up by AI, because otherwise one of the real possibilities is that the human race will be wiped out. I support what Elon is doing with Neuralink and OpenAI. Both initiatives aim to prevent humans from becoming victims of their own creation and of AI.

It might already be more intelligent than humans, so I'm not sure it will take 50 more years. Yet even if it's capable of processing more, or thinking faster, it's also like a baby which has to be trained and raised. Google, Facebook, and other companies probably have "infant AI" being trained right now.

My concern isn't whether or not companies have AI and offer us indirect access to it (like the church did with the scripture), but I'm more concerned about how the individual can have direct access to AI without requiring permission to escape their ignorance or human limits.

If most of the problems in the world are due to the ignorance inherent in the human condition then is it really moral to require humans to be involuntarily ignorant? Is it moral to filter the AI through corporations which profit from the relief from those limitations?

We can make a case that AI is like a utility, and that centralizing AI is like centralizing electricity or the ability to read and write. OpenAI is something I like and support, but it's not decentralized, so even if it is "open" it's not exactly clear what that will mean.

I don't think the big companies as we have them now will still exist 20-30-40 years from now. I do think that small companies can and will also develop AI. I actually think the big powerhouses will crumble at some point in time. Everybody will have access to AI; actually, we will be integrated with AI at some point in time. Another road we can and maybe will go down is virtual reality, living our lives in a virtual world.

Don't you think we have a lot of regulatory burdens preventing the evolution of companies? I am completely with you when it comes to deregulation of these virtual entities because I think in some ways it might be our only hope. If we don't evolve the concept of a company at the rate of the technology then we could end up running into some human limitations which CREATE security vulnerabilities.

Think of it like this: if you are a hacker, you will be able to exploit the human vulnerabilities in any network configuration. If traditional institutions have more vulnerabilities because of how they're designed, then they will be open to certain attacks and rendered less effective. In a world of autonomous cyberwarfare, we would need institutions which can survive the worst scenarios, just as the Internet was built to make sure communications could survive the worst scenarios.

Honestly what happens to the world if AI is in corporations and the corporations all get attacked or corrupted somehow? If the shareholders get pressured to use the AI for dangerous purposes? Our institutions aren't even integrated with AI and if we are to integrate we need to remove the regulations preventing it and think carefully about which regulations are necessary to keep AI safe and effective. Then we need to figure out how to make sure regulations themselves remain effective.

With regard to cyberwarfare, we certainly have to do something. Today the biggest container terminal in our Rotterdam harbour (once the biggest harbour in the world) is down due to some ransomware, but no ransom is requested; they just shut all systems down and are keeping them down. This is a major incident for Rotterdam and the country. These types of events will make us start thinking about how to prevent this. We humans are slow, too slow sometimes. Hopefully we are not too slow for AI. Another solution would be to make 'off grid' the new normal and shut down all technology. We will have this in our own hands, each individual. If your life is built upon non-technology, then there can be no harm from remote technology.

We need to use our AI to help us design more resilient institutions. This has been one of my motivations behind my involvement with Bitnation and similar groups who take a decentralized approach to institutions. The issue is that if everything remains centralized, then in the cases where the centralized institutions fail, what are we supposed to do?

We need options to exist just in case. Also it is important to me that the decentralized institutions can evolve into something better than the centralized versions. This is why I try to avoid adding human limitations into future institutions which don't require these limits.

By human limitations I mean our ignorance, our bias, our racism, sexism, nationalism; none of this is something I would add to a decentralized model. These biases make sense in geopolitics but don't make much sense in digital politics. The reason nationalism makes sense in geopolitics is that a person can be a citizen of only one nation at a time, but in digital space a person can be a citizen of multiple nations at once, rendering the old paradigm moot.

I like the way you see things. Unfortunately, it is the nature of human beings.

Excellent post, friend. I hope you continue like this, and I hope you'll help me with a vote on my post.
