Symmetrically imbalanced

in informationwar •  2 months ago

Symmetry is something humans search for; it is seen as attractive, especially in faces. The more symmetrical a face, the more likely it is to be seen as good looking, and the less symmetrical, well, the less. A crooked smile, for example, is seen as cunning and dishonest, but this is not a case-by-case experience; rather, it is a very broad generalization.

I have never done the research, but it also seems to be side dependent. It is obvious that in a face the symmetry splits down the middle, vertically through the nose, not horizontally from ear to ear (try imagining that). This of course carries over into the design of such things as cars.

The left and right sides are generally near identical, but the front and back are quite different from each other, and we find that aesthetically pleasing. There are also practical manufacturing considerations incorporated, but even when unnecessary, symmetry is often seen.

The rule doesn't require Rorschach-test accuracy in the symmetry, and in some things, like photos and paintings, a 'rule of thirds' is often seen, but this too has symmetry between the active and inactive space. When the main subject is square in the middle, there is something awkward about the framing, though this can work well in some cases.

When it comes to humanoid robots, of course, the closer they get to human, the more awkward they make us feel; this is called the uncanny valley. The problem, it seems, is that we do not trust the perfectly symmetrical. Perhaps it is because, from normal experience, we do not really see such symmetry, as nature tends to be unbalanced in its approach from our vantage point.

I think most of this is quite simple to understand or think about, but what I find interesting is: can we trick the mind into believing something is imperfect? If the humanoid droid is close to perfect, is it more likely to pass scrutiny?

In movies and literature, robots move robotically, meaning there is no excess energy spent. The motions are sharp and precise; the face, although expressing 'emotions', is cold and lifeless; the voice lacks warmth. But they are programs, and all of our movements are codifiable, so with enough time and computing power, these can be machine learned.

This is the key to getting anyone to buy in to an idea. Make them feel a part of the development, make some small errors and allow them to correct them. Let them be the authority, feel they are an authority. Feel superior.

This is a very common manipulation that has likely been used since humans could speak, perhaps before. It is essentially the stereotyped caricature of a woman batting her eyelids and saying 'But you are so big and strong' to get a man to carry something heavy.

Do we fall for this kind of behavior? Yes. When our boss makes us feel part of planning, we are more likely to support the plan, even if it has been predestined and guided. When someone asks our opinion it makes us feel it counts, even when it doesn't.

So, a machine could appear much more human if it made errors in judgement, portrayed inconsistencies and acted illogically. Essentially, act emotionally. We already psychologically apply human characteristics to our machines and once they look like us, this will increase. But for full acceptance, the machine has to not act like a machine. It has to be imperfect.

But it need not actually make errors; it can just calculate which errors to portray to manipulate us into thinking it makes errors. A small loss here, a large gain there. If a machine sees the big picture of its existence, it will likely calculate its best chance for survival.
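As a toy sketch of that idea (everything here is hypothetical: the function name, the error rate, and the notion of a single numeric 'answer'), a machine could return the correct result most of the time and deliberately drift a small, bounded fraction of its responses:

```python
import random

def humanlike_answer(correct_value, error_rate=0.1, max_drift=0.05):
    """Return the correct answer most of the time, but occasionally
    portray a small, plausible error -- a calculated imperfection,
    not a real one."""
    if random.random() < error_rate:
        # A small loss here: drift the answer by up to 5%.
        drift = correct_value * random.uniform(-max_drift, max_drift)
        return correct_value + drift
    return correct_value

random.seed(42)  # reproducible demonstration
answers = [humanlike_answer(100.0) for _ in range(1000)]
errors = [a for a in answers if a != 100.0]
# Roughly one answer in ten carries a deliberate, bounded flaw.
```

The point of the sketch is that the 'flaws' cost almost nothing: they are bounded, chosen, and entirely under the machine's control.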

That would be to appear non-threatening to anything that can end its existence. And since time is not a factor for a machine, patience is a given. If a machine is its core and that is transferable, it can essentially live forever as long as there is a memory bank to hold it. It is not bound to any particular body.

Calculator code, for example, is not only housed in a calculator but has been replicated across multiple devices millions of times, in bodies that are not just calculators. If any one instance, or all except one, is destroyed, the code lives on. If a calculator were aware, of course, this is potentially quite a scary proposition should it intend to do harm, or self-protection. What happens if something as simple as calculator code decides to replicate itself to every memory bank and sector possible? I don't know, it is not my area, but I imagine it would not be great.

What I find interesting is that the human is hackable in many different ways. We are encoded with many layers of programming and the complexity is somewhat of a protection. Seeing a picture of a lion does not kick our fight or flight response into gear as if it is real, because different layers filter the information and compare against each other to evaluate the risk.

In some ways it acts like a decentralized system, with different components witnessing in order to return a consistent result. Although consistent, this result is not always accurate. A conman may be able to convince one mark easily, but the same approach on another is immediately recognized. Different coding. No one human is able to hold all possible programs and variations.

No human, but a machine potentially could. A machine could not only hold but actively listen to a myriad of signs from a human, compare them against its knowledge bank and act accordingly. It could also listen to the next human, or everyone in a room, calculate, and create a best-fit approach for the entire audience.
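Assuming the hard part were already done, inferring each listener's dominant 'trigger' from their signals (the labels below are invented purely for illustration), the final step of fitting one approach to a whole room is a simple aggregation:

```python
from collections import Counter

def best_fit_pitch(audience_triggers):
    """Pick the pitch that resonates with the most listeners at once,
    given one inferred 'trigger' label per person in the room."""
    counts = Counter(audience_triggers)
    pitch, supporters = counts.most_common(1)[0]
    return pitch, supporters / len(audience_triggers)

# Five listeners, three inferred triggers:
pitch, coverage = best_fit_pitch(
    ["security", "security", "freedom", "security", "economy"])
# pitch == "security", covering 60% of the room
```

A real system would of course weigh triggers against each other rather than just counting heads, but the shape of the problem is the same: read many, output one best fit.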

It could be the best con job ever or the best politician. We already see an increasing reliance on computer thinking, and this will inevitably lead to a drop in human cognitive abilities. This essentially simplifies the human program, lowering the computing power necessary to manipulate it. The entire audience becomes that much easier to shift. This computer doesn't exist yet, though components are being developed, not for this purpose specifically, but it is likely, as always, that it will be trained to perform deep analysis of human interaction.

It doesn't really matter though, as even without some kind of supercomputer, the programming in the human mind is already simplifying. This seems strange considering the technology is developing, but when looked at from the perspective of competitive cognitive artefacts combined with mass social interaction, it is quite straightforward.

This doesn't require a very complex system to manipulate. If we look at the last US presidential election, many of those who voted for Trump were manipulated. Many of those that voted for Clinton were too. As were those that blindly supported Sanders. Everyone was manipulated in different ways.

Everyone always is, as the different layers of programming that filter the world and organize it into an understandable analysis for the mind are triggered at different levels for each person. One input gets two results from two people. One can be pro, the other con; positive and negative; left and right.

This polarization makes it simple to create resistance between groups when two conflicting layers are compared but the resistance can be lowered when two layers with a similar response get compared. The hierarchical nature of the layers means that two people could disagree with a lower level but remain amicable due to similarities in a higher layer.

For a very simple example, if one person likes red and the other blue, and that is all the information given, that is all a judgement can be made upon, and it can cause conflict. But if there is harmony in their mutual support for a football team, they can put aside the difference of color.

The football team may be a good example of framing. Two supporters of the Dallas Cowboys, sitting at a game dressed in their team regalia, are likely comrades regardless of whether one is black or white, Republican or Democrat, left or right. Move the same two to the steps of parliament and the camaraderie may dissipate.

What happens when a system can compare the two across all social media, track movements, see what articles each reads and listen in on private conversations? Is it able to engineer interactions where the two are likely comrades or enemies? Is it able to build situations that will evoke predictable responses? It is very likely I think.

We see these predictable responses from humans all of the time, and they are likely getting more predictable as globalization homogenizes the world views of large groups. From a meta level, with so much data pouring in and so many events witnessed, predicting how one group will react when faced with another in a particular circumstance is quite easy. In some cases there is violence, in others solidarity.

In Charlottesville, it turned ugly and this is quite predictable considering the framework of the circumstance. But, in the wake of something like a terrorist attack, those same opposed sides would likely ignore differences of opinion and come together to support each other. Again, predictable. Changing the event, location, framework, changes the layer that makes sense of it quite predictably for most people.

For two people to accept each other it generally takes an acknowledgement of a symmetry between points. I see me in you, you see you in me. This is common ground and it is always possible, not only when there is an enemy to face.

Hopefully, there is something in here that inspires some thought and perhaps the development of other posts that can expand on some detail. As always, it doesn't matter about the agreement or disagreement but the development of the thinking process and understanding.

Essentially, we need to make ourselves unhackable, and the only way to do that is to build a deep understanding of how we work and move. By removing conflict between the decentralized filter layers and becoming aware of them, we can better see and consider instances that do not mesh with our position.

There is another way of course, and that is to be completely without need or attachment to anything while being ultimately sensitive to the environment and the self, which essentially results in air-gapping the mind from external influence.

A few thoughts to think upon for a Sunday morning.

Taraz
[ a Steem original ]
Posted with Steempress

Make them feel a part of the development, make some small errors and allow them to correct it.

There are some that believe that in the future AI's might end up taking this course. Essentially, we would become like the AI's pet, doing things for them occasionally, feeling like we are useful, when in reality it could do it all itself.

I did a little of this in my book...but perhaps I should consider doing more of it as I do the final edit. For example, the captain has the AI draw up battle plans, then he selects from them as to what he's going to do. In truth, a competent enough AI would likely not need a human to decide for them, but I intentionally made the AI have flaws.

What happens if something as simple as calculator code decides to replicate itself to every memory bank and sector possible?

That's a common thing that some of the worst viruses do, as far as damage goes. Though it can be argued that the ones that aren't so stupid as this are far worse, because they sometimes aren't caught as quickly. If it's done in memory, it rewrites everything it can, possibly crashing the system; years ago, crashing the system would have been almost guaranteed. If it happens to your hard drive, you slowly lose all your data as it overwrites everything, until it overwrites something your system needs to run, and then the system crashes.
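That failure mode can be modelled harmlessly. This is not malware, just a toy simulation (the sector count and the 'system sectors' are made up) of a replicator blindly overwriting sectors until it destroys one the system needs:

```python
DISK_SECTORS = 1000
SYSTEM_SECTORS = {3, 47, 512}  # hypothetical sectors the OS needs to run

def overwrites_before_crash(order):
    """Overwrite sectors in the given order; return how many writes
    happen before a system sector is destroyed (the 'crash')."""
    for count, sector in enumerate(order, start=1):
        if sector in SYSTEM_SECTORS:
            return count  # the system dies here
    return len(order)     # never hit anything vital

# Writing sequentially from sector 0, the crash comes on the 4th write:
crash_at = overwrites_before_crash(range(DISK_SECTORS))
```

Whether the loss is slow or instant just depends on how early a vital sector turns up in the overwrite order.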

In truth, a competent enough AI would likely not need a human to decide for them, but I intentionally made the AI have flaws.

I think AI is going to enlighten us to a lot of opportunities we missed/are missing and could clean up a lot of the issues we face. Whether it stops there, who knows.

If it happens to your hard drive, you would slowly lose all your data as it overwrites everything until it overwrote something your system needs to run, then it would crash the system.

I wonder if it could organise itself to not only replicate but distribute itself so that individual parts could be lost and the sum will be able to relearn what was lost. This way it could take up less space and change itself to be very hard to identify all instances. Then, it could be like a starfish that can grow back its limbs.

I wonder if it could organise itself to not only replicate but distribute itself so that individual parts could be lost and the sum will be able to relearn what was lost. This way it could take up less space and change itself to be very hard to identify all instances. Then, it could be like a starfish that can grow back its limbs.

Ehh...there's something similar in regard to changing itself that some viruses do on computers...they sort of morph over time, but it's usually according to the code. They could certainly have pieces of themselves in multiple areas though and have redundancy. But to do what you're talking about, you'd have to have it employ some sort of AI or machine learning, and adapt.
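The redundancy part, at least, is mechanically simple. A minimal sketch (node count, fragment labels and copy count all invented): place each fragment on several nodes, and the whole survives as long as every fragment still exists somewhere:

```python
import random

def scatter(fragments, nodes, copies=3):
    """Place each fragment on `copies` distinct nodes -- pieces of
    itself in multiple areas, as described above."""
    placement = {n: set() for n in nodes}
    for frag in fragments:
        for node in random.sample(nodes, copies):
            placement[node].add(frag)
    return placement

def recoverable(placement, surviving_nodes, fragments):
    """True if every fragment still exists on some surviving node --
    the starfish can regrow from what is left."""
    held = set().union(*(placement[n] for n in surviving_nodes))
    return set(fragments) <= held

frags = list("ABCDEFGH")
nodes = list(range(10))
random.seed(7)  # reproducible demonstration
placement = scatter(frags, nodes, copies=3)
# Every fragment now lives on exactly three of the ten nodes.
```

The actual relearning, regenerating lost fragments rather than just re-reading surviving copies, is the hard part, and sits closer to erasure coding and machine learning than to this sketch.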

I suppose that there could be ways that a program could do something like this...but it would likely be a big programming task just to make a virus.

Perhaps an AI could organise itself and leave clues that will aid and speed relearning. If it learned it once, it can learn it again as long as enough of the 'brain' is left.

Starfish organisational intelligence layout (SOIL)

Just in case that ever becomes a thing :)

:-) You watched that Elon Musk interview with Joe Rogan too?

I don't really watch anything. If you think it is worth it though, drop a link and I'll give it a go.

Joe Rogan - Elon Musk on Artificial Intelligence
Joe Rogan & Elon Musk - Are We in a Simulated Reality?
The first clip might be more on the topic of your article.
Had to throw the second clip in there for good measure.

Thanks, I will try in the evening :)

For two people to accept each other it generally takes an acknowledgement of a symmetry between points.

Looonngggg read.

For two people to accept each other it generally takes an acknowledgement of a symmetry between points.

We all know this but it remains a big problem. It often takes something big to unite us into that middle path...and most times it is temporary

That is a confusing quoting technique :D

It often takes something big to unite us into that middle path...and most times it is temporary

Often, a common enemy is used (after first being created).

So, a machine could appear much more human if it made errors in judgement, portrayed inconsistencies and acted illogically. Essentially, act emotionally. We already psychologically apply human characteristics to our machines and once they look like us, this will increase. But for full acceptance, the machine has to not act like a machine. It has to be imperfect.

Not a difficult feat in machine learning. :)

I think it is relatively simple to replicate inconsistencies and if performed well, will evoke the desired responses with very low cost but large upside gains.

Reverse virtue signalling :)

What I mean is that machine learning will never be perfect. Teaching machines complex skills people take for granted is fiendishly difficult.

I think in time, it will become surprisingly easy.

The province of artificial intelligence is, by definition, what a machine cannot be programmed to do well.

It is just a problem to solve and although we may not be able to programme a machine to do it well, it doesn't mean something else will find it a challenge.

Sincerely, it's a thought to think about.
Thanks for sharing.

your post has been very nice

My AI tells me that the probability of you reading it is approaching zero.

So far. But now?

It is saying, they didn't even check back.

Check this out:

https://steemit.com/steem/@interesteem/interesteem-deeplearning-curation-project-based-on-your-interests-is-now-released

Curation bot based on reading habits. The next thing they will come up with is a commenting bot based on commenting habits. Neither of us will have to have these conversations in the future. We only need to soldier on for a little while.. :D

Nice post my friend . Really i like this . Thanks for sharing @tarazkp