Contest on the AI Alignment Problem

If you're interested in the problem of creating friendly artificial intelligence, you might be interested in this contest.

This contest, now in its second round, will distribute at least $10,000 to people who write something useful on the AI alignment problem.

The "AI alignment problem" is very roughly the problem that human values and morality tend to be very complex and hard to specify, and that if you ever tell a very smart artificial intelligence to do something in particular, it might end up doing what you say rather than what you want. So for instance, if you tell the AI to maximize happiness, it might end up tiling the universe in rat brains on heroin. Which is probably not what you wanted.

The problem is potentially compounded by the worry that if the AI can improve itself, it gets smarter, which lets it improve itself faster, and so on, so that you might only get one chance to tell it what to do before it is beyond your control.

If this is your first time encountering this idea, you've probably thought of all sorts of objections to what I've said. If so, I recommend this FAQ; you probably shouldn't enter the contest.

If you've encountered this problem before, however, and put some thought into it, I recommend taking a look at some of the winners of the prior round. It looks to me as though you don't need to be a genius to make a reasonable contribution to the problem.
