
RE: Will the Rise of Digital Media Forgery be the End of Trust?


To clarify what I mean: if you look at Cyc and LucidAI, for example, you will see what I mean by a shared knowledge base:

When I refer to AI, I am referring to common sense computing and symbolic AI (not neural nets or AGI).

As the knowledge base grows, building new systems on top of it becomes easier. The core activity on this sort of platform would be selling your knowledge to it in exchange for tokens. This is the knowledge economy: it builds a shared knowledge base similar to Wikipedia, but with payments for whoever contributes to it. Once the knowledge base is large enough, questions can be asked and the platform itself can help answer them.
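A minimal sketch of that contribute-and-query loop, assuming a hypothetical KnowledgeBase class and a flat REWARD_PER_FACT token payout. None of these names come from any real platform; they only illustrate the idea of contributors being paid for facts that anyone can later query.

```python
# Hypothetical sketch of a shared, contribution-rewarded knowledge base.
# All names (KnowledgeBase, REWARD_PER_FACT, etc.) are illustrative assumptions.

REWARD_PER_FACT = 1  # tokens credited per accepted contribution

class KnowledgeBase:
    def __init__(self):
        self.facts = set()       # shared pool of (subject, relation, object) triples
        self.balances = {}       # token balance per contributor

    def contribute(self, contributor, fact):
        """Add a new fact to the shared base and pay the contributor in tokens."""
        if fact not in self.facts:
            self.facts.add(fact)
            self.balances[contributor] = self.balances.get(contributor, 0) + REWARD_PER_FACT

    def query(self, subject=None, relation=None):
        """Return all facts matching the given subject and/or relation."""
        return [f for f in self.facts
                if (subject is None or f[0] == subject)
                and (relation is None or f[1] == relation)]

kb = KnowledgeBase()
kb.contribute("alice", ("water", "is_a", "liquid"))
kb.contribute("bob", ("deepfake", "is_a", "forged_media"))
print(kb.query(relation="is_a"))   # the platform answers from pooled knowledge
print(kb.balances)                 # contributors earn tokens for what they add
```

The point of the sketch is only that contribution, payment, and querying are one loop: the more people are paid to add, the more useful the answers become.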

So I agree with you that there is a threat of autonomous, weaponized attacks, but I do not think symbolic AI in particular is inherently good or bad. If, at the very birth of such a platform, we made sure that ethics and cybersecurity knowledge populated the knowledge base first, then we could focus on building out the prosocial benefits. Perhaps we could build state-of-the-art cybersecurity solutions.

The acceptable level of risk is yet to be determined; I think this is where public sentiment comes in. If there is wisdom in the network, then when we query it there may in fact be ways to reduce risk, such as resource constraints: whoever supplies the resources can cut off their flow. Any sort of AI, even simple bots, requires computational resources, so if an AI turned out to be truly harmful we could simply agree to shut off any runaway AI that consumes resources as a mitigation. This, of course, would have to be something everyone agrees to do, which is again a question of public sentiment, but I do not see technical problems, just political ones.
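A sketch of what that resource-constraint mitigation could look like, assuming a hypothetical ResourceGovernor where compute suppliers vote to cut off a runaway agent once a simple majority agrees. The class name, the voting rule, and the majority threshold are all assumptions made for illustration, not a described design.

```python
# Hypothetical sketch of the resource-constraint mitigation discussed above:
# the parties supplying computation can collectively vote to cut it off.

class ResourceGovernor:
    def __init__(self, suppliers):
        self.suppliers = set(suppliers)   # parties who provide computation
        self.votes_to_cut = set()         # suppliers voting to stop the agent
        self.active = True                # whether compute is still flowing

    def vote_cutoff(self, supplier):
        """Record a supplier's vote to stop supplying the agent."""
        if supplier in self.suppliers:
            self.votes_to_cut.add(supplier)
            # Cut the flow once a simple majority of suppliers agrees.
            if len(self.votes_to_cut) > len(self.suppliers) / 2:
                self.active = False

    def grant_compute(self, units):
        """Release compute only while the suppliers have not cut it off."""
        return units if self.active else 0

gov = ResourceGovernor(["node_a", "node_b", "node_c"])
print(gov.grant_compute(10))   # 10: the agent is still supplied
gov.vote_cutoff("node_a")
gov.vote_cutoff("node_b")      # majority reached, resources are cut
print(gov.grant_compute(10))   # 0: the runaway agent is starved of compute
```

The hard part, as noted above, is not this mechanism but getting everyone to agree to pull the lever, which is a political question rather than a technical one.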

Finally, some of the very questions you ask are being asked at the LucidAI Ethics panel. They are building what we are talking about now, but in a closed, centralized manner:


Thanks for the video! I am also on an AI and Ethics board and am looking for more insights from across the academic and professional community.

I take some of the stances I do on these issues because I think the old model of discussing ethics is elitist and leaves too many people out of the conversation. For example, I'm not on the ethics board. Is the board open for anyone to join, or do they select only certain people from a whitelist?

For this reason I prefer a more collaborative approach to developing AI, along with a completely open means of discussing the ethical concerns. On ethics there will always be differences of opinion, but a central question is: do we want to maintain social stability above all else, or do we value something else more? Disruption is unavoidable with AI, yet the concerns I see usually focus on the impact AI could have on jobs or on the lifestyles people are accustomed to, with little vision or thought given to the opportunities. There are always both opportunities and risks to consider.
