"We" have to decide? More like the owners of the AI need to decide, "we" just have to deal with whatever they choose to apply AI to. Fully conscious AI that can make it's own moral decisions I don't think is that close to being reality - but AI that can be applied by governments to create a Minority Report style reality, that I can see happening pretty quick.