While speaking with my friend over coffee, I described a scenario in which a small group of people is given a pill that grants them superintelligence, and asked what might come about. He thinks that with such power, they would battle it out and quickly dominate the rest of the species, essentially enslaving them.
That's the problem with the unknown. If we can't know the unknown, can we scope its impact in known terms? And even if they knew ways to accomplish their goals peacefully, with or without enslaving people, you would still have to consider their personal motivations and their morals.
You could try framing it in terms of altruism versus selfishness: if altruistic, they would act in the best interests of everyone; if selfish, only in their own.
But then you have to consider that there might be a mix of reactions among this superintelligent group. What's the split of altruism versus selfishness? Maybe they'd see each other as the bigger threat to accomplishing their goals and end up in mutually assured destruction. Or maybe they'd divide the world up, and you'd get some dominions that are utopias for everyone within them, and some that are utopias only from their ruler's perspective.
But maybe what's in everyone's best interest is actually to be enslaved, as a cold and intelligently calculated outcome. Or maybe there's too much unknown in this problem and we just can't conceive of the scenario.
Perhaps, as you explored, it's just unknowable because we didn't get that pill. But what if they know all of these things and choose to do nothing? Are they immoral for not acting?
Fun experiment.
It is also possible that both of these framings drop away, and there is just what is necessary and what is not.
Perhaps that would be better.