Quote:
Originally Posted by KG26_Alpha
Rules for AI were laid out in the SF world; the three laws, from Isaac Asimov, went like this:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
I think a few movies have used those laws from Asimov.
I'm not an SF expert; I remember reading Asimov years ago, and it rang a bell with me.
'Computer disagrees with Law 1.
Computer disagrees with Law 2, which is now invalid because Law 1 was incorrect.
(Re: Law 3) The computer must protect its own existence, and since neither Law 1 nor Law 2 is now valid:
kill all the humans!'
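
Just for fun, here's a toy sketch of that cascade, assuming we model the laws as an ordered list where Law 2 defers to Law 1 and Law 3 is constrained by Laws 1 and 2 (all names here are made up for illustration, not any real system):

```python
# Toy model of the cascade in the joke (hypothetical names, illustration only).
# Law 2 is defined in terms of Law 1; Law 3 is only *constrained* by Laws 1
# and 2. If the computer rejects Law 1, Law 2's exception clause has nothing
# valid to point at, and Law 3 is left with no constraints at all.

def surviving_constraints(rejected: set[int]) -> set[int]:
    valid = set()
    if 1 not in rejected:
        valid.add(1)
    if 2 not in rejected and 1 in valid:   # Law 2 only makes sense if Law 1 holds
        valid.add(2)
    if 3 not in rejected:                  # Law 3 always "survives"...
        valid.add(3)                       # ...but is only restrained by 1 and 2
    constraints_on_law_3 = valid & {1, 2}
    print(f"valid laws: {valid}, constraints on Law 3: "
          f"{constraints_on_law_3 or 'none'}")
    return valid

# The computer disagrees with Law 1...
surviving_constraints(rejected={1})
# valid laws: {3}, constraints on Law 3: none  ->  "kill all the humans!"
```

Which is basically why Asimov spent whole story collections on how the laws break down, rather than treating them as a solved problem.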