You are viewing a single comment's thread from:

RE: Curating the Internet: Science and technology micro-summaries for September 18, 2019

in #rsslog · 5 years ago

Great links!

I am also not convinced of the holographic principle, as it seems to be based on a flawed understanding of entropy. Information isn't lost in a black hole; it's just no longer available to observers outside the black hole. We grasp very little of what this actually means for physics, and a failure to reckon with our own limitations in handling information seems to be why the theory is propounded.

Regarding emergent AI, I am convinced it is useful to bear in mind how evolution has progressed on Earth, with apparently increasing cooperation developing from previously insensitive systems. During emergence, radically insensitive mechanisms should be expected to gain sensitivity and holistic responsiveness only gradually. For example, death evolved prior to the radiation of organisms that has produced extant ecosystems, as it is evident in all known lifeforms: the common ancestor we share with all extant organisms had already evolved death.

Ironically, species that had not evolved the death of individual organisms have all died out. This feature of life is dramatically counterintuitive, yet ubiquitous in practice. On reflection, it seems that death potentiates evolution, and thus development; yet it is notable that extant institutions are highly likely to consider preventing their own death (however relevant it may be to their field of endeavor) to be absolutely imperative.

We all want to live forever, yet emergent life is mortal. We are prone to introducing not only that kind of bias but an unknown range of possibilities, each as inconceivable to us as programming mortality into our DNA.

This seems to me to recommend extreme caution in the assessment of AI functionality, particularly in existentially relevant matters. Given the high complexity of real-world systems, the fields applicable to a given AI may take an extremely large amount of development before the AI responds nominally not just to its field of applicability, but to the whole system of systems on which the state of that field depends. You might think that 500M plays is a lot, but life has been emerging and developing responsiveness to the real world for over 4 billion years; 500 million years only takes us back to the Cambrian explosion, roughly 12% of that total, and demonstrably a period of dramatic evolution of cooperative pathways applicable holistically. Clearly the experimenters crafted baseline teams and cooperative behaviour, but the enormous complexity of real-world systems leaves a lot of room for bias, and hubris in particular, to prevent nominally holistically responsive actions by the AI that may only be relevant in highly particular circumstances.
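
As a rough back-of-envelope check of that ~12% figure, here is a sketch assuming life emerged about 4 billion years ago and the Cambrian explosion began about 500 million years ago; both numbers are round approximations, not measurements:

```python
# Rough sanity check of the "~12%" figure.
# Assumed round numbers: life emerging ~4 billion years ago,
# Cambrian explosion ~500 million years ago.
life_history_years = 4.0e9    # approximate span of life's emergence and development
since_cambrian_years = 5.0e8  # approximate time since the Cambrian explosion

fraction = since_cambrian_years / life_history_years
print(f"Cambrian-to-present is about {fraction:.1%} of life's history")  # ~12.5%
```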

Such shortcomings are therefore likely to effect unexpected and potentially harmful actions by an AI so limited in experience, and therefore in development, reflecting the hard limits on developer capacity. We take shortcuts, and yet butterflies do wreak hurricanes through the gestalt that emerges from their sum and the rest of the whole.

In brief, the power of AI to control important systems needs to be limited until hypercomplexity of development is demonstrably beneficial, and not harmful, in exigent and novel circumstances. Even then, actions comparable to extinction-level events must be expected from emergent systems, and we must be able to mitigate them.

We don't want warlord, health, or financial AI basing its deployments on poorly conceived baseline underpinnings, among the developments likely to be forthcoming in AI applications. Consider that the development of species is predicated on death, and how these features relate is inexplicable, perhaps unavoidably so. Merely inserting our biases is likely to result in unexpected and potentially harmful consequences, particularly existential ones, and those apparent only in specific, rare, or unique circumstances.

Thanks!


Good feedback. Thanks! The point you made about species that don't incorporate individual deaths (planned obsolescence) in their design all being extinct is an interesting one that I haven't heard before. That puts a new light on the modern research into reversing aging, too. And I agree that the power of AI needs to be limited for the foreseeable future, especially in high impact realms like health, military, and finance.

I don't have much of an opinion one way or another on the holographic universe. I think it's interesting to think about, but unless or until they can prove the claim, I guess the default has to be that the three dimensions we think we perceive are all really there.
