You are viewing a single comment's thread from:

RE: Will the Rise of Digital Media Forgery be the End of Trust?

in #security · 6 years ago

One more thing to consider: it may be the 'wise' attackers who develop the initial inroads to exploitation, but those techniques are then disseminated to less technical actors, basically anyone, to leverage. That's when the problem scales unimaginably.

This is one of the problems with AI and cybersecurity that I work on. Adoption of AI could be a turning point for efficiency and effectiveness across just about all facets of mankind, but if those same tools we embrace are leveraged for malice, then we have created a tool for our demise. How do we proceed so that we gain the benefits and yet still mitigate the risks to an acceptable level?


Well, it already is the case that anyone can set up bots, so that's not hard; the offense always seems to have the advantage there. The benefit of AI is that it also allows for automated detection. I don't think it's possible to create a tool which cannot be abused by an attacker. The point is that right now only the attackers have the bots, which means we get all the negative use cases without any of the benefits for the everyday user.

I suppose you could make the case that giving the everyday user access would mean attacks require slightly less technical sophistication, but we can look at Steem and ask the same question: if we release the bots to everyone, does it really make the situation worse? I don't think it would. I think it would allow everyone to contribute to pro-social bots.

An argument can always be made from the top down that keeping people poor and ignorant reduces unknown security risks. As a defensive strategy I think this is a bad one, because it assumes there can be a perfect defense against unknown attackers in an information context. I prefer the strategy of building for resilience, with the expectation that there is no way to predict or defend against every kind of attack (it's cat and mouse), but that we can build the sort of network where recovery from any attack is quicker.

If we have a network with a shared knowledge base and bots, building wisdom for us all and compounding on itself, then that wisdom would also cover cybersecurity. It would include answering the questions of how to recover from an attack and how to mitigate the risks which inevitably come from abuse of information. The reputation system, for example, if it is to be designed at all, will have to be built from the current state of the art in terms of knowledge.

Just one example of a pro-social use case: if we all have bots, we can simply tell our bots to filter information and automate shopping. This would exclude a lot of the attacks (not all). The reputation economy would mean every bot which is anti-social, malicious, or harmful gets downvoted, a bad score, etc.
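As a rough illustration of what that filtering could look like (the bot names, scores, and threshold are purely hypothetical, not an existing Steem or bot API), a user's bot could simply refuse to listen to any peer bot whose community score has gone negative:

```python
# Hypothetical sketch: ignore messages from bots whose net community
# score (upvotes minus downvotes) has fallen below a threshold.
BAD_SCORE_THRESHOLD = 0

# Illustrative reputation ledger: bot id -> net score
reputation = {
    "shopping-helper": 42,
    "spam-blaster": -17,
    "news-filter": 9,
}

def trusted_bots(reputation, threshold=BAD_SCORE_THRESHOLD):
    """Return the set of bot ids the user's own bot should listen to."""
    return {bot for bot, score in reputation.items() if score >= threshold}

incoming_messages = [
    {"from": "news-filter", "text": "Top stories for you today..."},
    {"from": "spam-blaster", "text": "Click here for free tokens!!!"},
]

allowed = trusted_bots(reputation)
filtered = [m for m in incoming_messages if m["from"] in allowed]
print(filtered)  # only the message from 'news-filter' survives
```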

Sybil attacks and botnets will still exist, but the idea is that the most effective and popular bots will be the ones people appreciate (in theory). Also in theory, you can set up the bots so that, for instance, Alice's bots carry Alice's reputation as verified bots, and if a bot does anything bad it damages Alice's reputation. Unverified bots, or bots with no reputation, would be the ones to be concerned about.

Pseudo-anonymity means I do not need to know who is behind the bot; I just need to know, via cryptography, that someone with a good reputation in the community is behind it. When everyone has their own bots, the good and useful bots gain the advantage. When it remains a mysterious technical thing, I may still have access to these bots because of my technical abilities, but the people who need them the most will in essence have to pay for bots run by technical people (just like bots on Steem now).
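A minimal sketch of that idea, assuming an Ed25519 keypair stands in for Alice's pseudonymous identity (this uses the Python `cryptography` package; the reputation ledger and the 50-point threshold are made up for illustration):

```python
# Hypothetical sketch: a bot action is accepted only if it is signed by a
# pseudonymous key whose community reputation is good. We never learn who
# actually holds the key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
from cryptography.exceptions import InvalidSignature

# "Alice" is known to the network only as this key and its reputation.
alice_key = Ed25519PrivateKey.generate()
alice_pub = alice_key.public_key()
alice_key_id = alice_pub.public_bytes(Encoding.Raw, PublicFormat.Raw)

# Illustrative reputation ledger keyed by the raw public key bytes.
reputation = {alice_key_id: 87}

def accept_bot_action(pub_key, message: bytes, signature: bytes, min_rep=50):
    """Accept the action only if the signature verifies and the key's
    community reputation is above a minimum threshold."""
    try:
        pub_key.verify(signature, message)
    except InvalidSignature:
        return False
    key_id = pub_key.public_bytes(Encoding.Raw, PublicFormat.Raw)
    return reputation.get(key_id, 0) >= min_rep

msg = b"bot-action: recommend article 123"
sig = alice_key.sign(msg)
print(accept_bot_action(alice_pub, msg, sig))          # True
print(accept_bot_action(alice_pub, b"tampered", sig))  # False (bad signature)
```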

I see no way of stopping markets from forming and bots from proliferating. I think the benefits far outweigh the risks if the wisdom aspects I mention can be implemented. It may be true that bad actors get enhanced wisdom from it, but it means you and I also get those enhancements. I don't think the bad actors will have much of an advantage until, of course, they do what they always do and find new attacks. That is unavoidable.

It is technically impossible to create a new technology which can vastly improve the condition of the world without any risk of it being turned against us. The Internet, social media, even the printing press can be turned into a weapon. In fact, all of them have been weaponized in some fashion, but I'd rather have the technology too, so I can have a level playing field.

To clarify what I mean: if we look at, for example, Cyc and LucidAI, then you can see what I mean by a shared knowledge base.

When I refer to AI, I am referring to common sense computing and symbolic AI (not neural nets or AGI).

When you have a larger and larger knowledge base, building new systems becomes easier. So the best thing you would be able to do with this sort of platform is sell your knowledge to it in exchange for tokens. This is the knowledge economy which builds the shared knowledge base, similar to Wikipedia but with payments for whoever contributes to it. Once there is a very large knowledge base, the questions can be asked and we can use the platform itself to help answer them.
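A toy sketch of that knowledge economy, with hypothetical fact triples, contributors, and token rewards (nothing here is Cyc's or LucidAI's actual interface):

```python
# Hypothetical sketch: contributors add facts to a shared symbolic
# knowledge base and earn tokens per accepted contribution; anyone can
# then query the accumulated knowledge.
knowledge_base = []   # facts stored as (subject, relation, object) triples
token_balances = {}   # contributor -> tokens earned
REWARD_PER_FACT = 1

def contribute(contributor, fact):
    """Accept a new fact into the shared base and pay the contributor in tokens."""
    if fact not in knowledge_base:
        knowledge_base.append(fact)
        token_balances[contributor] = token_balances.get(contributor, 0) + REWARD_PER_FACT

def query(subject, relation):
    """Return everything the knowledge base currently asserts about (subject, relation)."""
    return [obj for (s, r, obj) in knowledge_base if s == subject and r == relation]

contribute("alice", ("phishing", "is_a", "social engineering attack"))
contribute("bob",   ("phishing", "mitigated_by", "user training"))
contribute("bob",   ("phishing", "mitigated_by", "email filtering"))

print(query("phishing", "mitigated_by"))   # ['user training', 'email filtering']
print(token_balances)                      # {'alice': 1, 'bob': 2}
```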

So I agree with you that there is a threat of autonomous weaponized attacks, but I do not think symbolic AI in particular is necessarily good or bad. If, for example, at the very birth of such a platform we made sure that ethics and cybersecurity knowledge populated the knowledge base first, then we could focus on building out the pro-social benefits. Perhaps we could build state-of-the-art cybersecurity solutions.

The acceptable level of risk is yet to be determined; I think this is where public sentiment would come in. If there is wisdom, then when we query the network there may in fact be ways to reduce risk, such as resource constraints (whoever supplies the resources can cut off the flow of resources). Any sort of AI, even if it's just bots, will require computational resources, so if one turns truly malicious we could simply agree to shut off any runaway AI which consumes resources as a mitigation. This of course would have to be something everyone agrees to do, which is again a question of public sentiment, but I do not see technical problems, just political ones.
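A minimal sketch of that resource-constraint mitigation, with made-up quota numbers and bot names: the resource supplier meters usage and suspends anything that runs away.

```python
# Hypothetical sketch: whoever supplies compute enforces a quota, and any
# bot that exceeds it (a "runaway" bot) is simply cut off.
COMPUTE_QUOTA = 1000  # arbitrary compute units per accounting period

usage = {}        # bot id -> compute units consumed this period
suspended = set()

def record_usage(bot_id, units):
    """Meter a bot's compute and suspend it if it blows past its quota."""
    if bot_id in suspended:
        raise RuntimeError(f"{bot_id} is suspended; no resources allocated")
    usage[bot_id] = usage.get(bot_id, 0) + units
    if usage[bot_id] > COMPUTE_QUOTA:
        suspended.add(bot_id)  # the resource supplier cuts off the flow

record_usage("helpful-bot", 300)
record_usage("runaway-bot", 1500)      # exceeds quota, gets suspended
print("runaway-bot" in suspended)      # True
try:
    record_usage("runaway-bot", 10)
except RuntimeError as e:
    print(e)                           # suspended bot gets nothing further
```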

Finally, some of the very questions you ask are being asked at the LucidAI Ethics panel. They are building what we are talking about now, but in a closed, centralized manner.

Thanks for the video! I am also on an AI and Ethics board, looking for more insights across the academic and professional community.

The reason I take some of the stances I do on these issues is that I think the old model of discussing ethics is elitist, and too many people are left out of the conversation. For example, I'm not on the ethics board. Is the ethics board open for anyone to join, or do they select only certain people from a whitelist?

For this reason I prefer a more collaborative approach to developing AI, along with a completely open means of discussing the ethical concerns. On ethics there is always a difference of opinion, but one main question is whether we want to maintain social stability above all else or whether we value something else more. Disruption is unavoidable with AI, yet the concerns I see are often focused on the impact AI could have on jobs, or on the lifestyles people are accustomed to, with not a lot of vision or thought given to the opportunities. There are always both opportunities and risks to consider.
