Symmetry is something humans search for; it is seen as attractive, especially in faces. The more symmetrical a face, the more likely it is to be seen as good looking, and the less symmetrical, well, the less. A crooked smile, for example, is seen as cunning and dishonest, but this is not a case-by-case experience, rather a very broad generalization.
I have never done the research, but it also seems to be side dependent. In a face, the symmetry is obviously split down the middle, vertically through the nose, not horizontally from ear to ear (try imagining that). This carries over, of course, into the design of things like cars.
The left and right sides are generally near identical, but the front and back are quite different from each other, and we find that aesthetically pleasing. There are practical manufacturing considerations involved too, but even when unnecessary, symmetry is often present.
The rule doesn't require Rorschach-test accuracy in the symmetry, and in some things, like photos and paintings, a 'rule of thirds' is often used instead, but this too has a symmetry between the active and inactive space. When the main subject is square in the middle, there is something awkward about the framing, though this can work well in some cases.
When it comes to humanoid robots, of course, the closer they get to human, the more awkward they make us feel. The problem, it seems, is that we do not trust the perfectly symmetrical. It is called the uncanny valley. Perhaps it is because, in normal experience, we do not really see such symmetry, as nature tends to be unbalanced in its approach from our vantage point.
I think most of this is quite simple to understand or think about, but what I find interesting is this: can we trick the mind into believing something is imperfect? If a humanoid droid is close to perfect, is it more likely to pass scrutiny?
In movies and literature, robots move robotically, meaning there is no excess energy spent. The motions are sharp and precise; the face, although expressing 'emotions', is cold and lifeless, the voice lacking warmth. But they are programs, and all of our movements are codifiable, so with enough time and computing power, these can be machine learned.
This is the key to getting anyone to buy in to an idea. Make them feel part of the development, make some small errors and allow them to correct them. Let them be the authority, feel they are an authority. Feel superior.
This is a very common manipulation that has likely been used since humans could speak, perhaps before. It is essentially the stereotyped caricature of a woman batting her eyelids and saying 'But you are so big and strong' to get a man to carry something heavy.
Do we fall for this kind of behavior? Yes. When our boss makes us feel part of the planning, we are more likely to support the plan, even if it has been predestined and guided. When someone asks our opinion, it makes us feel it counts, even when it doesn't.
So a machine could appear much more human if it made errors in judgement, portrayed inconsistencies and acted illogically. Essentially, if it acted emotionally. We already psychologically apply human characteristics to our machines, and once they look like us, this will increase. But for full acceptance, the machine has to not act like a machine. It has to be imperfect.
But it need not actually make errors; it can simply calculate which errors to portray in order to manipulate us into thinking it makes them. A small loss here, a large gain there. If a machine sees the big picture of its existence, it will likely calculate its best chance for survival.
That would be to appear non-threatening to anything that could end its existence. And since time is not a factor for a machine, patience is a given. If a machine is its core code, and that code is transferable, it can essentially live forever as long as there is a memory bank to hold it. It is not bound to any particular body.
Calculator code, for example, is not housed only in a calculator but has been replicated across multiple devices, millions of times, in bodies that are not just calculators. If any one instance, or all except one, are destroyed, the code lives on. If a calculator were aware, of course, this is potentially quite a scary proposition, whether it intends to do harm or merely to protect itself. What happens if something as simple as calculator code decides to replicate itself to every memory bank and sector possible? I don't know, it is not my area, but I imagine it would not be great.
What I find interesting is that the human is hackable in many different ways. We are encoded with many layers of programming, and the complexity is somewhat of a protection. Seeing a picture of a lion does not kick our fight-or-flight response into gear as if it were real, because different layers filter the information and compare against each other to evaluate the risk.
In some ways it acts like a decentralized system, with different components witnessing to return a consistent result. Although consistent, this result is not always accurate. A conman may be able to convince one mark easily, while the same approach on another is immediately recognized. Different coding. No one human is able to hold all possible programs and variations.
No human, but a machine potentially could. A machine could not only hold but actively listen to a myriad of signals from a human, compare them against its knowledge bank and act accordingly. It could also listen to the next human, or everyone in a room, then calculate and create a best-fit approach for the entire audience.
It could be the best con job ever, or the best politician. We already see an increasing reliance on computer thinking, and this will inevitably lead to a drop in human cognitive abilities. This essentially simplifies the human program, lowering the computing power necessary to manipulate it. The entire audience becomes that much easier to shift. This computer doesn't exist yet, but components of it are being developed. Not with this purpose specifically in mind, though it is likely, as always, that it will be trained to perform deep analysis of human interaction.
It doesn't really matter though, as even without some kind of supercomputer, the programming in the human mind is already simplifying. This seems strange considering the technology is developing, but when looked at from the perspective of competitive cognitive artefacts combined with mass social interaction, it is quite straightforward.
This doesn't require a very complex system to manipulate. If we look at the last US presidential election, many of those who voted for Trump were manipulated. Many of those who voted for Clinton were too, as were those who blindly supported Sanders. Everyone was manipulated in different ways.
Everyone always is, as the different layers of programming that filter the world and organize it into an understandable analysis for the mind are triggered at different levels for each person. One input gets two results from two people. One can be pro, the other con; positive and negative, left and right.
This polarization makes it simple to create resistance between groups when two conflicting layers are compared, but the resistance can be lowered when two layers with a similar response are compared. The hierarchical nature of the layers means that two people could disagree at a lower level but remain amicable due to similarities in a higher layer.
For a very simple example, if one person likes red and the other blue, and that is all the information given, that is all a judgement can be made upon, and it can cause conflict. But if there is harmony in their mutual support for a football team, they can put aside the difference of color.
The football team may be a good example of framing. Two supporters of the Dallas Cowboys, sitting at a game dressed in their team regalia, are likely comrades regardless of whether one is black or white, Republican or Democrat, left or right. Move the same two to the steps of parliament and the camaraderie may dissipate.
What happens when a system can compare the two across all social media, track their movements, see what articles each reads and listen in on their private conversations? Is it able to engineer interactions where the two are likely comrades, or enemies? Is it able to build situations that will evoke predictable responses? It is very likely, I think.
We see these predictable responses from humans all of the time, and they are likely getting more predictable as globalization homogenizes the worldviews of large groups. From a meta level, with so much data pouring in and so many events witnessed, predicting how one group will react when faced with another in a particular circumstance is quite easy. In some cases there is violence, in others solidarity.
In Charlottesville, it turned ugly, and this was quite predictable considering the framework of the circumstance. But in the wake of something like a terrorist attack, those same opposed sides would likely ignore differences of opinion and come together to support each other. Again, predictable. Changing the event, location or framework changes the layer that makes sense of it, quite predictably, for most people.
For two people to accept each other, it generally takes an acknowledgement of a symmetry between points. I see me in you, you see you in me. This is common ground, and it is always possible, not only when there is a shared enemy to face.
Hopefully there is something in here that inspires some thought, and perhaps the development of other posts that can expand on some of the detail. As always, what matters is not the agreement or disagreement but the development of the thinking process and understanding.
Essentially, we need to make ourselves unhackable, and the only way to do that is to build a deep understanding of how we work and move. By becoming aware of the decentralized filter layers and removing the conflict between them, we can better see and consider instances that do not mesh with our position.
There is another way of course, and that is to be completely without need or attachment to anything while being ultimately sensitive to the environment and the self, which essentially results in air-gapping the mind from external influence.
A few thoughts to think upon for a Sunday morning.
[ a Steem original ]
Posted with Steempress