Official Fulqrum Publishing forum

Pilot's Lounge: will ai ever get so complex as to deserve rights? (http://forum.fulqrumpublishing.com/showthread.php?t=38359)

raaaid 01-27-2013 03:26 PM

will ai ever get so complex as to deserve rights?
 
Or even if AI gets as complex as a human intelligence, does it deserve no rights in that case?

http://img.blogdecine.com/2011/02/ai-f3.jpg

http://www.youtube.com/watch?v=QejONbEspeM

Das Attorney 01-27-2013 04:43 PM

Well, if it's ever that intelligent, then it can make its own mind up about whether it deserves 'rights'....

badfinger 01-27-2013 05:38 PM

Quote:

Originally Posted by Das Attorney (Post 495773)
Well, if it's ever that intelligent, then it can make its own mind up about whether it deserves 'rights'....

IT will probably want to be a lawyer.

Binky9

Skoshi Tiger 01-28-2013 01:39 AM

It would decide our fate in a microsecond

Hmmmm! Not Cool!

tk471138 01-28-2013 02:49 AM

As far as I'm concerned, aliens, robots, and AI have one right when I'm around, and that is the right to get killed by me. I hate robots, aliens, and AI and will not tolerate them, and I suggest you people think the same way: should aliens land, kill them immediately, and the same with robots or AI, before they get a foothold. Robots, aliens, and AI want one thing, and that is to dispose of us.

The AI or robots only have whatever rights their creator wants to endow them with.

KG26_Alpha 01-28-2013 12:09 PM

Quote:

Originally Posted by tk471138 (Post 495822)
As far as I'm concerned, aliens, robots, and AI have one right when I'm around, and that is the right to get killed by me. I hate robots, aliens, and AI and will not tolerate them, and I suggest you people think the same way: should aliens land, kill them immediately, and the same with robots or AI, before they get a foothold. Robots, aliens, and AI want one thing, and that is to dispose of us.

The AI or robots only have whatever rights their creator wants to endow them with.

:rolleyes:



Rules for AI were laid out in the SF world; the three laws, from Isaac Asimov, went like this:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

I think a few movies have used those laws from Asimov.

I'm not an SF expert; I remember reading Asimov years ago, and it rang a bell with me.
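
The strict priority ordering in those three laws can be read as a filter: a lower law is only consulted for actions that already satisfy the laws above it. Here is a minimal Python sketch of that reading; the Action fields and the candidate actions below are invented for illustration, not taken from Asimov or from anyone in this thread.

Code:

# Hypothetical sketch of the Three Laws as a strict priority filter.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # would this action injure a human?
    obeys_order: bool      # does it follow the human's current order?
    preserves_self: bool   # does the robot survive it?

def choose_action(candidates):
    # First Law: discard anything that harms a human.
    safe = [a for a in candidates if not a.harms_human]
    # Second Law: among safe actions, prefer those that obey orders.
    obedient = [a for a in safe if a.obeys_order] or safe
    # Third Law: among those, prefer self-preservation.
    surviving = [a for a in obedient if a.preserves_self] or obedient
    return surviving[0] if surviving else None

actions = [
    Action("push the human clear", harms_human=False,
           obeys_order=False, preserves_self=False),
    Action("follow the order to attack", harms_human=True,
           obeys_order=True, preserves_self=True),
]
# The First Law filters out the attack before obedience is considered.
print(choose_action(actions).name)  # prints: push the human clear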

ZaltysZ 01-28-2013 01:02 PM

Quote:

Originally Posted by KG26_Alpha (Post 495860)
Isaac Asimov

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

The first law has a fault, a possible dead end, because it does not cover situations in which a dilemma arises: saving one human causes harm to another, so acting violates the law, but doing nothing violates it too. Undefined behavior, anyone? :-) That is the nastiest thing that can happen in software.
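
That dead end is easy to make concrete: if every available option, including inaction, lets some human come to harm, then a literal First Law check permits nothing at all. A small hypothetical sketch (the scenario data is invented):

Code:

# Hypothetical deadlock: every option, even "do nothing", harms someone.
options = {
    "save human A": {"B"},      # humans harmed by taking this option
    "save human B": {"A"},
    "do nothing":   {"A", "B"},
}

def first_law_permits(option):
    # A literal reading: permitted only if no human comes to harm.
    return len(options[option]) == 0

permitted = [o for o in options if first_law_permits(o)]
print(permitted)  # [] -- no legal action exists; behavior is undefined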

swiss 01-28-2013 01:24 PM

Quote:

Originally Posted by tk471138 (Post 495822)
as far as im concerned aliens robots AI has one right when im around and that is the right to get killed by me...i hate robots aliens and AI and will not tolerate them and i suggest you people should think the same way should aliens land immediately kill them, the same with robots or AI before they get a foot hold....robots, aliens, and AI want one thing and that is to dispose of us...



the AI or robots only have what ever rights their creator wants to endow them with...

lol, what do you reckon?

KG26_Alpha 01-28-2013 01:33 PM

Quote:

Originally Posted by ZaltysZ (Post 495863)
The first law has a fault, a possible dead end, because it does not cover situations in which a dilemma arises: saving one human causes harm to another, so acting violates the law, but doing nothing violates it too. Undefined behavior, anyone? :-) That is the nastiest thing that can happen in software.

Not really, because they are not causing the "harm" in the first place.

SlipBall 01-28-2013 01:42 PM

Quote:

Originally Posted by KG26_Alpha (Post 495860)
:rolleyes:



Rules for AI were laid out in the SF world; the three laws, from Isaac Asimov, went like this:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

I think a few movies have used those laws from Asimov.

I'm not an SF expert; I remember reading Asimov years ago, and it rang a bell with me.


Why does Sarah Connor live in fear? :confused:

