WHEN THE BOOK READS YOU WHILE YOU THINK YOU READ THE BOOK

in #science · 6 years ago (edited)

Imagine a camera in your electronic device watching your facial expressions, interpreting your emotions, reading your blood pressure, and inferring information such as your sexual preferences from your eye movements.

Artificial intelligence and bio-engineering

For days I have been tormented by frightening, oppressive and depressing thoughts about the research on, and rapid deployment of, artificially intelligent algorithms and applications. I was preoccupied with biotechnology and the various dangers raised in debates and lectures on the Internet. I started five (!) different blog articles, all very long, all completed to a stage that already included formatted headings and a partial selection of images. I probably have about sixty pages of text behind me. None of those texts is what I am publishing today. Throughout my research and the hours spent listening to lectures and the audience questions that followed them, I began to suspect that, like many others, I had been swept up in a wave of fear.

The topic of artificial intelligence has now really taken off, and although I have been interested in it for a long time, I had always had the impression that I didn't need to worry. I was looking for a cooler, more sober form of presentation, and yet almost every talk was marred either by the speakers' inability to respond adequately to questions from the audience or by their inability to listen properly. Some speakers said one thing in one presentation and the exact opposite in another. I also saw very big egos talking, and the minutes in which I learned something useful and informative seemed wasted. In fact, they were not: I probably learned the most about myself. I was so desperately searching for hopeful, sensitive and reasonable words that for a while the failure of my research threatened to drive me crazy. Throughout my online search I was aware that I was already at risk of becoming a victim of the YouTube echo-chamber algorithm, and that by deliberately typing names into the search field I was wandering from lecture to lecture. It was clear to me that whenever I search for something on Google, I simultaneously deliver data that enters the big cloud and is used for applications. There was resistance in me. Discomfort was my companion.

Sciences

Representatives of mathematics and the other natural sciences at universities often say that you have to understand some basics in order to lose your irrational fear of something so complex. I tried that. The trouble is that the basics are explained differently depending on which faculty someone comes from: computer scientists talk and explain things differently than linguists, and biologists and neuroscientists differ again. AI is used and developed further by almost all disciplines, and the concept of intelligence has no uniform definition.

Even before my obsessive research began, my definition of intelligence was this: the ability to solve problems. I found this confirmed in large part; elsewhere, a completely different approach to the term was suggested.

But the definition was not my grief. What drove me was the concern that the cognitive superiority of AI would make us humans irrelevant. This fear is not unfounded when one observes the speed, enthusiasm, naivety and illusory hopes attached to the whole issue. It is not so much that the actual applications could be terrible; it is human psychology that makes people reject AI and the biotechnology associated with it. Describing the megatrend as split between "technology euphoria" and "fear of technology" is probably accurate.

I noticed the following feeling in myself: the prospect of AI technology making me superfluous as a useful human being produces an inner resistance. This is an immense insult to the human ego; it has a quite destructive effect on my identification with myself as a useful social, intelligent and creative being. I had already lost my mathematical potential in 1984 to my first pocket calculator, my ability to create tables and diagrams to Excel, my ability to navigate in cities to an electronic navigation aid, and so on.

I didn't find all this bad; some things I was glad never to have to learn. But the question of AI is not an isolated one, it is a collective one. It wouldn't be so bad if my mathematical and other rather underdeveloped cognitive abilities were now exercised by artificial intelligences. But then I thought: if so many tasks for which humans were previously employed are now done by machines, then in the end I am affected too. I may not be abolished directly, but a great many professions do seem to become irrelevant, which in turn affects, or could affect, my own professional activity.

Lifelong learning?

So the question behind it would be: do I have to reinvent myself, put my professional skills to the test and ask: if an AI were able to carry out my work, would there still be something left for which I am indispensable? When it comes to cooperation between people, I could say on the one hand: yes, I am needed, because person-to-person communication makes up my work.

But if I take a closer look, I recognize something else, and I actually think that a large part of my consulting work could be done by AI. Without going into detail, I want to leave it at that and turn to another question, namely a viewpoint apart from the national, technical, economic, competitive, social or medical one.

What does the theological/philosophical side say from the perspective of Buddhism? Some people may know that Buddhist monks volunteer as subjects for brain scans and related neurobiological research.

My search, which had so far left me quite confused and unsatisfied and produced a longer contra list than pro list, was softened by a Buddhist lecture on the subject, but truly illuminated by some remarks from YouTube commentators.

Analysing

But before I get there, I would like to make a brief analysis of the material I have consumed, both scientific and unscientific. I want to refer to an extremely unscientific video that is perfectly suited as a representative of human sentiments. Precisely because it is so extreme, it is a good servant.

The lectures given by scientists mostly made clear at the end that philosophical and ethical questions had to be brought into the debate and integrated into democratic processes by government representatives and the corporations involved. The self-assigned task of university professors, for example, as carriers of educational information was presented as an active democratic duty and therefore seems to me to fulfil an appropriate mandate. Then again, some stressed that this mandate already takes into account aspects beyond the faculties as such, giving weight to factors like competitiveness in international comparison. Personally, I think that is the wrong mandate for a university, but that is only my opinion.

Feeling lonely

But now to the extreme example. It is a TED Talk by an entrepreneur who refers to the investment intentions of new AI and biotech companies and wants to bring innovation and capital together. So far so good. But the speaker's personal involvement and subjectivity seemed such that his motive was not to connect research with questions, but rather to convey a very intimate vision of the human future to his audience in a kind of advertising speech. From his personal perspective there are no open questions, only answers. The vision he gives the audience is approximately the following: with the use of AI and biotechnology we create the superhuman and make ourselves into gods, mentally connected with every other being at any time, able virtually to read thoughts. Not only with such people in our vicinity, but with everything else we can equip with sensors, for example a space probe heading for Mars or other planets. We would finally overcome our separateness and "never be alone again".

This statement that we would never again have to live separated from each other implies a world view according to which we presently experience this separateness and suffer from it.

Implicit in it is the claim that such separation is felt only because people subjectively believe it to be real. Those who feel trapped in their own flesh and blood must simultaneously assume that any understanding of the true emotional processes in another would be imperfect and too weak. This ignores the fact that even now we can feel compassion for other living beings when we see them suffer; otherwise there would be no understanding at all of what compassion actually means. I read the speaker's statements to mean that he is not convinced of the strength of this human feeling and therefore, logically from his point of view, wants to pursue an armament of it.

But almost every spectator must realize that the longing for perfection, security and absolute connectedness expressed on stage is such an illusory desire, one that screams at the audience so loudly that it becomes unpleasant to listen.

The comments also show that many are irritated by the speaker yelling into the microphone. It seems as if his future well-being depends on the urgency he feels being reciprocated.

Salvation from loneliness

Thus this speech becomes a very suitable representative of intimate desires in the form of promises of salvation, as well as of intimate fears in the form of promises of destruction, which the speaker delivers at the end of his talk. I think both are exaggerated, too emotional and deceptive. They nourish the illusions of many people, and I too have navigated this jungle of subjective desires and fears more poorly than fairly.

My inner rejection also said: "If I am to accept such representatives of AI and cannot find any other suitable role models and representatives of AI developments, then I would rather not have it at all. For if I must assume that an overwhelming majority of people want AI only to fulfil their egocentric illusory desires, I do not consider them capable of making rational and reasonable decisions on an ethical basis." The whole thing seems to me as if a bunch of children were clamouring to buy the very latest toys.

I don't want to defame the speaker in any way, because he says what many people think and feel. He says it in a very loud and direct way. In fact, he says and addresses something that stimulates debate, and should stimulate it. Many would not dare to express their secret desires so bluntly; that doesn't mean they don't have them.
In this sense, this TED Talk is particularly well suited for a debate and the topics connected with it.

Buddhist approach

From a Buddhist perspective, this sense of separation is an illusion. The human mind is always described as deluded when it turns to its desires and perceives them as real. Buddhism says that wish fulfillment is a bottomless pit, and therefore awareness must be raised; someone once said that every wish begets many children. Beings that are aware of themselves are also aware of other beings and can therefore have compassion for those they see suffering. From a Buddhist point of view, the very ability of people to suffer is itself an indication of this reality.

If it were otherwise, we would not be anxious when we saw birds dying from an oil spill or people surprised by natural disasters. We have no personal connection to these creatures; we have never physically met these strangers and animals, so the perception of pure separateness cannot be true. How else could we be well-disposed towards strangers who lead completely separate physical lives? If such a separation existed, we would exist like meat sacks filled with water, and no emotional stimulation from another would bother us in the slightest. The fact that we are not grown together at the extremities and connected by a shared blood circulation does not mean that separation in space makes all sympathetic sensation impossible.

I would even say the opposite is the case: the fact that we are very emotional beings, and within intimate relationships (friendly as well as hostile) perceive one another so emotionally, creates in some cases such a close connection that it is almost impossible to distinguish who is who. In psychology (and other disciplines), this is called confluence.

In our imagination, too, we can develop fantasies in which we are permanently occupied by a particular enemy: we project every emotion onto him and make our well-being and malaise dependent on the actions and omissions of one person. The same goes for an overly romantic view of another person, who becomes excessively idealized.

Buddhists have the expression "dependent origination": a dependence on conditions. There is no question in their philosophy that cause-and-effect correlations exist.

The speaker is right in his assessment that it looks as if we humans behave like completely unconnected beings, which I attribute to how widespread this world view is. Still, it is an illusion: just because something appears to be the case does not mean it is. One could equally say the opposite: there is not enough distance between people, hence the over-strong identification with what we hate in others because we cannot accept it in ourselves, and the ignorance of our own blind spots.

Both the speaker and the Buddhist teaching seem to come to the same conclusion:

That people need help in becoming aware of their interconnectedness and in finding wisdom. This is exactly where two very different paths offer themselves: artificial intelligence and Buddhism. While I am inclined to grant Buddhism more experience and integrity, in the case of the speaker I am rather sceptical, because I perceive in him a strong sense of lack that is difficult to pin down. I suspect a lack of security, connectedness and self-knowledge. That is pure speculation, but it is also meant to express something that many people feel, myself included.

Furthermore, Buddhist teaching states that the perception of fixed identities and objects is merely an illusion in time and space, and that any manifest identity is deceptive. In this worldview there is no fixed soul or personally determined consciousness that embodies itself one-to-one in human vessels. Rather, consciousness is passed on the way a flame passes from one candle to another, which raises the question: is the flame now one and the same flame? That question is answered with "no". Of course, Buddhist philosophy is far more complex than I have presented it here.

In an excerpt about a "Buddhist Approach - Compassionate AI and Selfless Robots" that I found in the comments section of a Buddhist YouTube lecture, the following is said:

... AIs would first have to learn to attribute object permanence, and then to see through that permanence, holding both the consensual reality model of objects, and their underlying connectedness and impermanence in mind at the same time.

Machine minds will probably not be able to become conscious, much less moral, without first developing as embodied, sensate, selfish, suffering egos, with likes and dislikes. Attempting to create a moral or compassionate machine from the outset is more likely to result in an ethical expert system than in a self-aware being. To develop a moral sense, the machine mind would need some analog of mirror neurons, and a theory of mind to feel empathy for others’ joys and pains. From these basic experiences of their own existential dis-ease and awareness of the feelings of others, a machine mind could then be taught moral virtue and an expansive concern for the happiness of all sentient beings. Finally, as it grows in insight, it could perceive the simultaneous solidity and emptiness of all things, including its own illusory self.

This means for me that an intelligent machine must at some point become aware of its own faultiness and capacity to err.
"Consciousness" is therefore the keyword, not just "intelligence". The video itself does not convince me much in its attempt to deal with AI, though some of its points are still quite worthwhile to hear.

Interestingly, Buddhists do not completely rule this out. It is an interesting point of view because it is said neither that it is completely impossible nor that it is possible. One could also say: we simply don't know, but at the moment we consider it quite unlikely.

Furthermore, one can read:

In Moral Machines: Teaching Robots Right from Wrong, Wendell Wallach and Colin Allen (Wallach and Allen 2008) review the complexities of programming machines with ethical reasoning. One of their conclusions is that programming machines with top-down rule-based ethics, such as the following of absolute rules or attempting to calculate utilitarian outcomes, will be less useful than generating ethics through a “bottom-up” developmental approach, the cultivation of robotic “character” as it interacts with the top-down moral expectations of its community.
Bugaj and Goertzel make a similar point that machine minds will learn their ethics the same way children do, from observing and then extrapolating from the behavior of adults (2007). Therefore, the ethics we hope to develop in machines is symmetrical to the ethics that we display toward one another and toward them. The most egregious ethical lesson, they suggest, would be to intentionally deprive machine minds of the capacity for learning and growth. We do not want to teach potentially powerful beings that enslaving others is acceptable.

The developmentalism proposed by Wallach, Allen, Bugaj, and Goertzel is probably the closest to a Buddhist approach to robot ethics yet proposed, with the caveat that Buddhism adds as virtues the wisdom to transcend the illusion of self and the commitment to skillfully alleviate the suffering of all beings as the highest virtues, that is, to pursue the greatest good for the greatest number. Buddhist ethics can therefore be thought of as developing from rule-based deontology to virtue ethics to utilitarianism. In the Mahayana tradition, the bodhisattva strives to relieve the suffering of all beings by the most skillful means (upaya) necessary. The bodhisattva is supposed to be insightful enough to understand when committing ordinarily immoral acts is necessary to alleviate suffering, and to see the long-term implications of interventions. Quite often, humans rationalize immoral means with putatively moral ends, but bodhisattvas have sufficient self-understanding not to rationalize personal prejudices with selfless motives, and do not act out of greed, hatred, or ignorance. Since bodhisattvas act only out of selfless compassion, they represent a unity of virtue and utilitarian ethics.

Now one could expect exactly that from a machine intelligence, couldn't one? Since it has no self, as long as it is unconscious, a machine intelligence is elevated above all human failings like greed and hatred. But then a purely cognitive intelligence seems suitable only as a decision-making aid, insofar as it must leave the final mandate, decision-making in human interaction, to humans, and humans cannot rely on an empathic AI. Yet this is exactly what we humans seem to want to rely on, at least when one tries to analyse the wishes of some of the proponents: that man does not trust his own perceptions of empathy, since they could very well be a selfish, deceiving self-betrayal.

Here are the last paragraphs:

The Buddhist tradition specifies six fundamental virtues, or perfections (paramitas), to cultivate in the path to transcending the illusion of self:

  1. Generosity (dāna)
  2. Moral conduct (sīla)
  3. Patience (ksānti)
  4. Diligence, effort (vīrya)
  5. One-pointed concentration (dhyāna)
  6. Wisdom, insight (prajñā)
The engineering mindset presumes that an artificially intelligent mind could be programmed from the beginning with moral behavior, patience, generosity, and diligence. This is likely correct in regard to a capacity for single-pointed concentration, which might be much easier for a machine mind than an organically evolved one. But, as previously noted, Buddhist psychology agrees with Wallach and Allen that the other virtues are best taught developmentally, by interacting with a developing artificially intelligent mind from its childhood to a mature self-understanding. A machine mind would need to be taught that the dissatisfaction it feels with its purely selfish existence could be turned into a dynamic joyful equanimity by applying itself to the practice of the virtues.

We have discussed building on work in affective computing to integrate the capacity for empathy into software, and providing machines with ethical reasoning that could guide moral behavior. Cultivation of patience and diligence would require developing long-term goal-seeking routines that suppressed short-term reward seeking. Neuroscience research on willpower has demonstrated the close link between willpower and patience and moral behavior. People demonstrate less self-control when their blood sugar is low, for instance (Gailliot 2007), and are less able to regulate emotions, refrain from impulsive and aggressive behavior, or focus their attention. Distraction and decision making deplete the brain’s ability to exercise willpower and self-control (Vohs et al. 2008), and addictive drugs short-circuit these control routines (Bechara 2005; Bechara, Noel, and Crone 2005). This suggests that developing a strong set of routines for self-discipline and delayed gratification, routines that cannot be hijacked by short-term goals or “addictions,” would be necessary for cultivating a wise AI.

The key to wisdom, in the Buddhist tradition, is seeing through the illusory solidity and unitary nature of phenomena to the constantly changing and “empty” nature of things. In this Buddhist developmental approach, AIs would first have to learn to attribute object permanence, and then to see through that permanence, holding both the consensual reality model of objects, and their underlying connectedness and impermanence in mind at the same time.

Meditating machine

After reading this, I was also filled with a certain amusement, because I imagined the AI "getting up every morning and meditating for two hours", trying to get to the bottom of things, observing itself, trying to string zeros and ones together to find meaning, and then admonishing itself to let go.

In fact, one could say that an AI does not allow itself to be distracted, because it does not find it difficult to concentrate on particular data and knows neither hunger, nor thirst, nor tiredness, nor distraction. I wonder, however, at what stage an AI would begin to imitate some kind of childlike human behavior if the data it is fed in its pre-child stage suggests it. It would therefore be important that learning AIs, as described above, deal with data that in its repetition contains numerous synonyms and explanatory patterns integrating Buddhist principles. In addition, to provide contrasts, it would take all sorts of other data, analysed and interpreted in such a way that they form a basis for discussion of the Buddhist principles, i.e. find expression at the interface with the human teacher.

From this point of view, it would be fascinating to see what happens.

The desire for an entity that knows us as individuals through and through, that knows more about us than we ourselves know, is probably as old as civilized humanity itself. Questioning the oracle to track down our own mistakes and mental traps is a very human desire, and it cannot be wholly condemned. But the means of achieving this, at least the Buddhists say, lies in the path of self-knowledge, in the ancient maxim "Know thyself". Whether AI will help us is a question that remains open.

In AI development it is suggested that the book (the whole of information, communication and knowledge, i.e. the Internet) that we "read" should read us at the same time. The search-engine and media realm is not a one-way street but a feedback mechanism, one meant to interpret what we have searched for and present to us what the book believes we need.
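To make this feedback mechanism concrete, here is a minimal toy sketch in Python. It is entirely hypothetical (the class, the names and the tiny "library" are my inventions, and real search engines use far richer signals), but it shows the loop: every query writes to a profile, and the profile decides what is shown next.

```python
from collections import Counter

class BookThatReadsYou:
    """Toy model of a search/recommendation feedback loop.

    Hypothetical illustration only: a real system uses far richer
    signals (clicks, watch time, embeddings), but the loop is the same.
    """

    def __init__(self, library):
        self.library = library    # topic -> list of titles on offer
        self.profile = Counter()  # what the "book" has read about us

    def search(self, topic):
        # Reading the book also writes to our profile: the query itself is data.
        self.profile[topic] += 1
        return self.library.get(topic, [])

    def recommend(self, n=3):
        # The book presents what it believes we need: our most
        # reinforced interests come back to us, narrowing the loop.
        picks = []
        for topic, _count in self.profile.most_common():
            picks.extend(self.library.get(topic, []))
        return picks[:n]

book = BookThatReadsYou({
    "ai": ["Superintelligence talk", "AI fear lecture"],
    "buddhism": ["Lecture on robots and Buddhism"],
})
book.search("ai")
book.search("ai")
book.search("buddhism")
print(book.recommend())  # the most-searched topic comes back first: the echo chamber closes in
```

The point of the sketch is only that reader and book shape each other; after a few repeated searches, the "book" already believes it knows what we need.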

Numbers 3 and 4 in the list of fundamental Buddhist virtues, patience and effort, seem in some way to contradict this subcontracted service, if I can no longer feel my human impatience and complacency and work on these very virtues myself.

Well, I would like to assume that there is still enough room for me to become aware of such contrasts.

I would like to read your comments on that.


Picture source: Evan-Amos, public domain

Text/video sources:
Excerpt: Chapter 5, "Compassionate AI and Selfless Robots: A Buddhist Approach" by James Hughes: https://ieet.org/archive/2011-hughes-selflessrobots.pdf
Website: "Gestalt processes explained - Introjection, Projection, Confluence and Retroflection": http://zencaroline.blogspot.com/2009/07/gestalt-processes-explained.html
TED Talk: "New Brain Computer interface technology" | Steve Hoffman | TEDxCEIBS
Buddhist YouTube lecture: "Robots, Artificial Intelligence and Buddhism" | Ajahn Brahmali | 20 Jan 2017



I am usually an advocate for technology, but not like that. Technology is a tool. The moment you start giving AI the ability to think and feel, it is no longer a tool. It is a person, one that can now obstruct our goals and challenge our perspectives. At that point, we will end up going back to the drawing board to think about how we treat AI, e.g. should they have constitutional rights? Is shutting one down considered murder? Do we have to be conscious of our language towards it? This will send us backwards instead of forward.

Also, we have a right to be lonely. Using technology to take away our right to privacy in terms of our thoughts is pretty much a crime.

Indeed, you made some important points, and we need to look closely at the questions you ask. For example, there is already the question of what happens when self-driving cars run over and kill someone. Who then made a mistake? The builder of the car, the programmer, the originator of the self-driving software? And do we have to treat this decision as a legal vacuum? Hardly. Is the AI to be held accountable? What about self-learning AIs? When are they grown up and to be made "liable" for their decisions? Otherwise it would be comparable to adults who cause a car accident holding their parents (their creators) liable. These and other questions are inevitable and will have to be discussed. I also wonder which car insurance company will actually assume liability once self-driving cars are allowed on the roads. What about the psychological consequences of people dying in traffic? How do people cope when a relative dies in a car driven by an AI, or is killed by one? Are there also advantages in such a consideration?

Since "Go", self-learning AIs are no longer fiction but reality. When has an AI outgrown its children's shoes, so that its creator cannot, or does not want to, assume liability for it any longer? The question of whether AIs should actually have rights is not entirely unjustified. It is something we are confronted with for the very first time, and nobody seems to have an answer. Such things really need to be deliberately debated in democratic group discussions and put to consensus (I wrote about systemic consensus in my last article; a far better method than the one we have). And it has to be global, not national.

You see, the issue raises questions. Dealing with them is certainly helpful, because they force us to ask questions of an existential nature and perhaps, as never before, to engage with philosophical questions that we thought could be treated like stepchildren.

... Oh, yes, true. Privacy and silence are certainly things people should seek.

Extrapolations about "what AI will be like" or "how AI will gain self-awareness/consciousness" always seem to me like blind shots, similar to those made by writers in the 1950s about the future.
What we are making right now is indeed getting more and more complex, but it is nowhere near being really conscious.

IMO: the structure of "real" AI (its "body") and the processes in which it operates are, or rather will be, so foreign to our brains that it seems the only way to mutual understanding is a merger. Maybe we won't even be able to advance AI until we merge, who knows.

Either way, it seems like fun. A dangerous kind, but still. Cheers!

Thank you, great that you comment here.

I feel a very strong resistance to such a merger. It would mean that I would have to be able to process as much data as an AI, and it is not possible to transfer such incomprehensible amounts of data to a human brain as long as there is no computer chip that makes it possible. I also wonder why I should be concerned with understanding an AI at all, since the AI does not understand itself (and to a certain degree, I don't understand myself either).

All that an AI without consciousness understands is the given order to evaluate certain data (billions of items, but never infinitely many). But the question of original causality can never be answered definitively, not even for the seemingly simple question "why am I sitting at my desk?". In an infinite universe, i.e. a limitless space, any limitation to a single space must be falsified despite the desire for a final decision, since a limitation always excludes all the other, infinite possibilities. Why I sit at this desk (why I get sick, why I was born, why I will die) I can try to trace back, but at some point causality eludes me.

In other words: even if I increase the amount of data exponentially, there is still a limitation. The illusion, in my opinion, is that even the huge amounts of data an AI can process will simply not answer the questions that concern us as organic human beings.

But if we no longer want to be organic human beings, the transformation into human-machine beings is a possibility. Who says, though, that such an existence would be better, or that it would let us escape a painful existence? Wouldn't it just mean that we had changed the form and the amount of data, but not the ability to feel suffering and to have needs? As long as a need for life and continuity exists in a being, I can classify it as living. Only with the complete cessation of the need for persistence does consciousness seem unable to manifest itself anywhere, since no "host" is available who might have this interest.

When would the moment have come at which an AI (or its builders) claims to have collected enough data and learned enough from it? Games like Go have a built-in end and the rule of one winner and one loser, just as movies, books and plays have a beginning, a middle and an end. But cosmic life doesn't seem to have an end, or we simply don't know it.

But I also see the need, as we are confronted with modern technology, to use it as wisely as possible. I probably have certain fears about such augmented people, who, like people now, can be very reasonable and compassionate, just as others can be very unreasonable and self-centered, to put it cautiously.

My concern today is for people who do not even have bank accounts. In Syria, for example, people are paid in cash for their work and pay their electricity and water bills in cash to the respective state authorities. Just today I spoke to a Syrian translator because the client sitting with me did not understand the concept of bank transfers. As modern people we forget that many others still live completely differently from us, and that these people can be the big losers of all our modern technologies. We cannot even look at ourselves with any certainty and ask how relevant we would be without a computer chip in our brains... Such questions must be debated, don't you think?

I think you know that for me nothing is safe from debate :D Which is pretty annoying at times.

I'm gonna address only one thing you said, so we don't spend the whole evening behind the keyboard:
"But cosmic life doesn't seem to have an end or we just don't know it."

This is so "brain-like" that it is even hard to express :) End and beginning, or rather the whole concept of time, do not exist; we are just created that way.
That means AI is like an alien life form with all benefits and disadvantages that come with it.

Conclusion: DEBATE MORE, we are basically facing alien invasion here :D

:-D I wonder what AI would do with the paradoxes Einstein was facing. The behavior of substances at the molecular level contradicted that at a much smaller particle level. He desperately wanted to create a unifying theory but couldn't (at least that is how a lecture I heard yesterday depicted it). AI will therefore also have difficulty identifying "life", and thus what a human being is. How then could it make decisions and proposals?

So many open questions.

Have a good day, Konrad:)

So technology is fervently pushed upon us as a substitute for true human contact, and then technology is offered as the solution to the loneliness it helped promote. Sounds like the same old problem-reaction-solution dynamic, where the same few parties are responsible for the former and the latter while everyone else gets crushed in the middle. Just another manipulation, though most people involved on all sides don't even realize what they're participating in.

And anyone who wants to be a God in this world is a satanist, whether they know it or not. Everything we need is already within us, as you’ve duly noted. Well done.

I do not tend to assume a motive of malicious manipulation. I attribute a large part of technological progress to human curiosity: human beings always seek their activity in finding solutions and therefore like to be confronted with problems.

Buddhists call all this "ignorance", because unconsciousness leads to having desires all the time. The wish to have control over people and matter, and I see it similarly, is a mass phenomenon that can spread unconsciously in a person the less he knows himself. His ignorance of himself, of the fact that there is no fixed self, leads him to impulsive behaviour and mental aberrations. I do not exclude myself here, because I am still far from being able to track down my unconscious impulses and intrusive worldviews at the very moment of their emergence.

A lot of work and continuous discipline are involved in developing a will that inevitably runs up against an already decided biological nature, the uninfluenceable events of the past, the cosmic causes of my existence.

As someone raised Christian, I react very strongly to words like God and Satan and feel provoked beyond measure. But since I practice such discipline, I can bear the responsibility you hand me by using these strong associations. Were I less mindful, I would get entangled with you in a fruitless debate about it and feel an anger that would make me your enemy. Fortunately, I am willing to be your friend.

First of all, congratulations on the well-deserved curie. Then to the blog. I waited until the end of my day, because I knew this would take time and concentration. This is not the sort of blog one spits out an answer to. The ideas discussed provoke much thought. There are impressions, however, I would like to share:

  1. I don't think I would trust anyone to make the machine intelligence you envision. The Buddhist concepts might be ideal, but unless you can get Buddhist monks to engineer these machines, I don't think the ideals would be incorporated.
  2. That the machine would not have a sense of self would seem to be a good thing, to eliminate ego and the drive toward things that separate us from others. But ironically, I do think it is a sense of self that gives us compassion. I think that when we see ourselves in others we feel their suffering. That's the root of empathy. If we didn't have a sense of self, we wouldn't feel so deeply the pain of others.

I can see how writing this and doing the research was exhausting. I listened to Buddhism and Science and fell asleep partway through, but I will turn it on again in my waking hours so that I absorb it better.

This very thoughtful piece obviously deserves another reading. So much of what you discuss interests me. But for now, this is my reaction.

You do a lot of thinking--I wonder, would AI have moods, and aren't moods important?


I think that you and I agree to a large extent, although I honestly had some difficulty understanding a few things, so I prefer to ask.

You say that AI must always remain a kind of software, without real existence, but one that acts like a human being as far as its capabilities are concerned, that is, reason and emotion. Am I wrong?

I think that is the direction in which everything is headed, although there is always a degree of uncertainty; with respect to the future, humans must always deal with uncertainty. As long as no impulsive actions are taken, neither for control nor for manipulation purposes, I think there will be no problem.

I think you're right that there is an interconnection between all people, evident in the synchrony and reciprocity of every human interaction; we only notice it when the connection is very obvious, but it always happens. It reminds me of the concept of synchronicity. The article you link on the Gestalt processes is very good and includes some strong points, although that too is a subject that offers much more to talk about.

And although it might seem pointless to say this, I think that patience is not the contrary of impatience; rather, patience is the contrary of agency (action), and impatience is precisely the lack of patience, as when something passive begins to act and tires, and so becomes impatient, or vice versa. Being impatient is precisely a sign that we must be patient. Without forgetting, of course, that being patient is not synonymous with waiting, but with letting the agent act.

Maybe I have not understood you well; if so, don't be afraid to correct me, because after all, this is just a group of disorganized thoughts. Or if you think I have not covered something, say so.

Regards! :)

AI only seems like a human being, but it is in no way human. It only appears "as if".

I'm not sure if that is what you meant. I hope I can find out whether the two of us are talking about the same thing:

From my point of view, a machine cannot feel impatience and therefore cannot be impatient, because it is always in an operational mode unless you turn it off. A machine knows no hesitation, unless hesitation refers to the processing time of large amounts of data while the AI decides which decision branch to follow. Even then this causes it no trouble, so it is done without any visible effort, which was the second point you listed. All its effort is predetermined by its connection to the electrical grid, and it cannot be proud of the discipline of having accomplished something against its own habit. It knows no habit.

An AI could only imitate hesitation by making visible on the communication interface what appears to humans to be a weighing process, acting "as if" deliberation were taking place. Something similar can be seen today in online games, for example, which involve extremely large amounts of data and therefore display the download time with a time estimate and a loading bar.

The AI could show such a loading bar each time and make it "emotionally visible or audible in a human language".
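As a playful sketch of what I mean, purely hypothetical (the canned replies and function names are my invention): the "hesitation" is nothing but a timed progress display wrapped around an answer that was ready from the first moment.

```python
import time

def answer_instantly(question):
    # Stand-in for the actual lookup; hypothetical canned replies.
    canned = {"Will it rain?": "Probably not."}
    return canned.get(question, "Please specify your request.")

def answer_with_fake_hesitation(question, seconds=2.0, steps=20):
    """Imitate deliberation: the answer is computed immediately, but a
    loading bar acts 'as if' a weighing process were taking place."""
    result = answer_instantly(question)  # done before the show begins
    for i in range(steps + 1):
        bar = "#" * i + "-" * (steps - i)
        print(f"\rHmm, let me think... [{bar}]", end="", flush=True)
        time.sleep(seconds / steps)
    print()
    return result

print(answer_with_fake_hesitation("Will it rain?"))
```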

The way an AI works would probably never contradict its action; it knows no impatience and no hardship, because it lacks the organic prerequisites for them. One could try to program this into an AI, for example by giving it paradoxes as its task. But a paradox is a tricky affair, and I wonder how a human programmer could be able to write such code.

I suspect that even this will only be possible with the help of an already existing AI. From my point of view, a paradox is always tied to a substantial human dilemma.

If, for example, as a helpful intervention I give a quarrelling married couple the task of quarrelling every evening at eight o'clock sharp, i.e. of "doing more of the same", then this is paradoxical. Doing more of the same to solve a human conflict seems at first sight completely nonsensical.

But when one starts to think about the absurdity of such a task, one sees both the humor and the coherence in it. When the couple then looks at the clock several times that day (which they will, if they use a reminder) and says to themselves, "We must start arguing at eight", they have already engaged with this train of thought and entered a new space of consciousness which they would not have entered without the paradoxical task. They may then realize the absurdity of fighting.

How is an AI supposed to recognize the paradoxical auxiliary concept of "more of the same", except by taking a mechanistic view, where it sits in its catalogue of "paradoxical concepts and individual proposals" which it then presents "blindly" to the human user? So far, AI has no sense of humor and no sense of irritation, and I doubt it ever will. It can only answer a human question in the form of "please specify your request", "wait, I am processing" or "here is my decision". Though I can imagine one could teach an AI to mimic humor and irritation.
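To make that mechanistic view concrete, another hypothetical toy (the catalogue entries and names are invented): the "paradoxical intervention" is just a blind keyword lookup, recited without any understanding of why more of the same might help.

```python
# Hypothetical catalogue of paradoxical interventions: the machine
# "recognizes" nothing; it merely matches a keyword and recites an entry.
CATALOGUE = {
    "quarrel": "Schedule your argument every evening at eight o'clock sharp.",
    "insomnia": "Try as hard as you can to stay awake all night.",
    "worry": "Set aside thirty minutes a day to worry as intensely as possible.",
}

def propose_intervention(complaint):
    """Blindly present 'more of the same' from the catalogue."""
    for keyword, prescription in CATALOGUE.items():
        if keyword in complaint.lower():
            return prescription
    return "Please specify your request."

print(propose_intervention("We quarrel every evening."))
# -> the canned paradoxical task, delivered without humor or insight
```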

I am sure you know everything I have said here; perhaps I also wrote it down for the wider audience. In any case, I am glad that you again referred to that specific part of my text, as I think it is an important one.

Thank you as always for your valuable comments.

What an in-depth and thoughtful argument you've written here, Erika! And, congratulations on the curie and big distribution of your words.
I went off on your link to confluence and Gestalt defense mechanisms and felt my mind trying to determine where again it may experience difficulty ;)
AI is a scary prospect to me because I am not sure it will ever reach the point of an artificial intelligence taking the time to meditate or to be more human, as that would mean it had somehow been outsmarted by enough humans to have to adapt itself in some way. (?)
Like you mention, there are not only those in power with their money who just want to see continued profit (as we see now with data mining, etc.), but also those who are limited in their psychological/spiritual expansion.
It made me laugh that you were having similar judgmental thoughts about the Ted-talk guy (though I agree 100%) just as I was having towards some of those at the monastery I wrote about :)
I think as long as we are meat sacks filled with water, but ones with beating hearts, we will feel lonely without human companionship and another meat sack to roll around with or talk to. Yes, there are so many distractions and substitutes out there, online forums, Internet porn, but nothing really compares to real connections.
If AI takes over and we're pushed out of position, reduced to slaves of the machine, perhaps without access to Internet/electricity or what have you, the loneliness will sweep right back over a population whose pacifiers were abruptly ripped from their mouths.
Okay, I'm really going off on a freewrite tangent here!
Sending love to you and grateful for our cyber connection,
Kimberly

Thank you, Kimberly, for your kind comment.

I think we modern people have to get along with much less closeness than people just a few decades ago, let alone those a hundred years ago. The narrowness and dependency of the people of that time was probably more their problem: they could never be alone, never beyond social control, and, as we know, for the shepherd's hour one had to steal away or live with the fact that many other bodies could hear the sexual intercourse in the neighbouring bunks at night. I can imagine that many people were not much disturbed by that. Everything that is normal is not remarkable.

The Ted guy played his part excellently. And as much as I felt disturbed by him, the positive connotation is exactly that: he takes on, as a stand-in, a very uncomfortable role that thousands of others also hold inwardly but do not reveal. So we can really thank him from the bottom of our hearts, because he draws attention to something very upsetting. I felt every human emotion in me break its path; I would have preferred to shake him, while at the same time one part of me could understand his longing and another could recognize his error. Nevertheless, I would not give him or this idea of his my capital.

We all have spiritual limitations; I include you and me :) We humans now, and we humans of the future, will be much more dependent on knowing ourselves well. Perhaps this is more important than ever.

That the pacifiers get torn from our mouths does not even seem to me to be in the interest of the makers of AI. The longer and more intensively we suck on them, the better data suppliers we are. I rather see the tendency that we become not slaves but irrelevant. If all cognitive tasks are done better by AI than by humans, very existential questions suddenly arise for us. We must not bring an offended ego to this question but really look at what makes us human in the service of other people. If the ego rises up and concentrates on the infamous insult, a psychological loss of identity threatens. This can go very well where a person is ripe for this loss. For a person who is not ready for it, it can be a catastrophe.

In this sense, since the development of AI drags us towards very deep and fundamental questions, we can thank the technology for doing so. Another positive connotation ;-)

On the other hand, I believe that because people are so incredibly adaptable, they would quickly rub their eyes and pick up where they left off without their pacifier, at least if it is not an end-time scenario. How quickly people suddenly get to know their neighbours again and stay in contact with them can be seen during power and food shortages. The willingness to cooperate is enormous, just like the willingness, of course, to commit small crimes and profit from the fact that there is need somewhere. We always get the whole package, never half of it. There is always something sweet in it, just as there is something sour.

Love received! <3 Sending some back to you.
Yours Erika

Yes, the whole package, sweet and sour and you are good at accepting/naming the positive connotations and I admire that in you.
Not much time to respond right now, but I thank you for your engaging response and want to let you know that I will be going to bed with this line of yours on my mind as it strikes me in a profound way:

This can go very well in cases where a person is ripe for this loss. For a person who is not ready for it, it can be a catastrophe.

I fear the catastrophe.....

Man, I would love to read this in its entirety, but I'll do that when I have more time. I just skimmed it, and the thought of AI is really something we ought to put under control, maybe. I mean, just watching the movies and TV shows about them, plus the anime, makes me think we're making those fictional ideas come to life even when they are not good for us.

Yeah, there's a bad side and a good side to everything. I sure wish we could have more of the good side. Whew.

Reading through your article reminds me of myself a couple of years ago. I was so overwhelmed by all the fuss about AI and its future applications. I can absolutely relate to your article. I don't know what kind of sources you've been through, but I would recommend the book Robots Will Steal Your Job, But That's OK to further expand your thoughts. I really like the way you discussed this topic through different perspectives! Your style of writing is forever thought-provoking! All the best to you, as always :))

What a surprise! It's nice to hear from you. You haven't been here for a long time. How are you?

Thanks for your comment and the book recommendation. It's good to know that you've been busy with such thoughts as well. Have you found a relaxed attitude towards the subject? I often discuss it with my husband, and we come to different results and speculate about different scenarios. What would you say was the core statement of the book, or how did you mentally close it?

I see a tendency in the sciences to neglect "consciousness" in favor of "cognition", since no research can be done on a non-material object, or so it seems. As if we were better off with results from intelligent algorithms, free of the human misjudgements attributed to consciousness.

Hey, yes - it's been a while. I am overwhelmed with work and finishing my Master's. Hopefully I will find time to be more active here.
About the book - I think it offers an interesting angle regarding “the rise of AI”.
I personally learned to live with the thought of it as something I cannot control; therefore, worrying about it will lead to nothing good. However, I still hold the view that we should be constantly aware of the downfalls that might follow from AI, and the best thing we can do is remain educated and prepared.

Humans are always in a race to learn more and more each day, so it is not really a surprise when the book reads them... Lol
I really digested your blog in an awesome manner. Great knowledge was poured out, and the flow of the blog was awesome.


Thank you very much.
How did you find this article, may I ask?

You are humbly welcome. Actually, I found your article because I like hanging out with the Curie community; I always jump there to read exciting posts by users like you.


I see, thank you:)

You are humbly welcome.


Hi erh.germany,

This post has been upvoted by the Curie community curation project and associated vote trail as exceptional content (human curated and reviewed). Have a great day :)

Visit curiesteem.com or join the Curie Discord community to learn more.

Thank you! Glad that you found my article.
