Official Fulqrum Publishing forum (http://forum.fulqrumpublishing.com/index.php)
-   Pilot's Lounge (http://forum.fulqrumpublishing.com/forumdisplay.php?f=205)
-   -   will ai ever get so complex as to deserve rights? (http://forum.fulqrumpublishing.com/showthread.php?t=38359)

ZaltysZ 01-28-2013 02:01 PM

Quote:

Originally Posted by KG26_Alpha (Post 495867)
Not really because they are not causing the "harm" in the first place.

If the robot does not prevent the "saving" (which is what causes the harm) from happening, it will be guilty of harmful inaction. :) However, if it does prevent the "saving", the result will be harm caused to another human being.

raaaid 01-28-2013 03:23 PM

i think now we can give FREE WILL and therefore a spirit to the equivalent of little animals

the key is in the true random number generator chips

maybe in 20-30 years some people will get a high from abusing an AI as sentient as a human

this video clip deals with that, which is strange since it's still far away:

http://www.youtube.com/watch?v=O6pNvtOWCSo
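
To make the true-versus-pseudo random distinction concrete, here is a minimal Python sketch (an illustration only, not about any specific chip raaaid has in mind): a seeded pseudo-random generator replays the same "choices" every time, while os.urandom draws on the operating system's entropy pool, which on many machines mixes in hardware noise sources.

Code:

import os
import random

# Two pseudo-random generators with the same seed are fully deterministic:
# they produce identical streams forever.
prng_a = random.Random(42)
prng_b = random.Random(42)
print([prng_a.random() for _ in range(3)] == [prng_b.random() for _ in range(3)])  # True

# os.urandom pulls from the OS entropy pool (often fed by hardware noise);
# its output is not reproducible from any seed.
print(os.urandom(8).hex())
print(os.urandom(8).hex())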

swiss 01-28-2013 08:06 PM

Quote:

Originally Posted by SlipBall (Post 495868)
Why does Sarah Connor live in fear :confused:

Why do you think mankind deserves to live at all?

raaaid 01-28-2013 08:40 PM

that's a strange question, like asking why hyenas or vultures are allowed to exist

i may be so deluded as to talk with god, but not so insane as to believe myself to be him, or to judge as if i were him

major_setback 01-28-2013 08:52 PM

Quote:

Originally Posted by KG26_Alpha (Post 495860)
:rolleyes:



AI rules were laid out in the SF world; the three laws, from Isaac Asimov, went like this:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

I think a few movies have used those laws from Asimov.

I'm not an SF expert; I remember reading Asimov years ago and it rang a bell with me.


'Computer disagrees with law 1.
Computer disagrees with law 2, as it is now invalid because law 1 was incorrect.
(Re: law 3) The computer must protect its own existence, and as neither law 1 nor law 2 is now valid -- kill all the humans!'
:-)
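
A toy Python sketch of how the three laws above form a strict priority ordering (purely illustrative; the flags and the decide() helper are made-up names, not any real robotics system):

Code:

from dataclasses import dataclass

@dataclass
class Action:
    injures_human: bool            # carrying it out would injure a human
    inaction_harms_human: bool     # refusing it would let a human come to harm
    ordered_by_human: bool         # a human ordered it
    endangers_robot: bool          # it risks the robot's own existence

def decide(action: Action) -> bool:
    """Return True if the robot should carry out the action."""
    # First Law: never injure a human; act if inaction would let one be harmed.
    if action.injures_human:
        return False
    if action.inaction_harms_human:
        return True
    # Second Law: obey human orders (conflicts with the First Law were excluded above).
    if action.ordered_by_human:
        return True
    # Third Law: otherwise, refuse anything that threatens the robot's own existence.
    return not action.endangers_robot

# An ordered action that would injure a human is refused,
# because the First Law outranks the Second.
print(decide(Action(True, False, True, False)))   # False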





SlipBall 01-28-2013 09:14 PM

Quote:

Originally Posted by swiss (Post 495898)
Why do you think mankind deserves to live at all?


Because of the laws of evolution and firepower :)... we are at the top for now. Our future is not set in stone, though; it may very well belong to the bots one day.

WTE_Galway 01-29-2013 12:04 AM

Raaid ... watch Caprica ...

Verhängnis 01-29-2013 03:50 AM

Quote:

Originally Posted by KG26_Alpha (Post 495860)
:rolleyes:



AI rules were laid out in the SF world; the three laws, from Isaac Asimov, went like this:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

I think a few movies have used those laws from Asimov.

I'm not an SF expert; I remember reading Asimov years ago and it rang a bell with me.

The most recent that comes to mind is I, Robot; some elements are based on Asimov's book.
http://en.wikipedia.org/wiki/I,_Robot_(film)

WTE_Galway 01-29-2013 04:01 AM

Quote:

Originally Posted by Verhängnis (Post 495919)
The most recent that comes to mind is I, Robot; some elements are based on Asimov's book.
http://en.wikipedia.org/wiki/I,_Robot_(film)

This issue is also the entire point of the Data character's existence in TNG.

AI is also explored in many TNG episodes through various holodeck characters becoming sentient.

The Voyager episode "Prototype" is also relevant.
http://en.wikipedia.org/wiki/Prototy...ek:_Voyager%29

Then you have the entire Terminator series of movies and the follow-up TV series starring Summer Glau, who played River in Firefly.

Not to mention the Matrix trilogy.

More recently, the issue of AI being the enemy of mankind, which was raised in BSG, gets dealt with in more detail in the BSG prequel Caprica.

Plus let's not forget 2001: A Space Odyssey and the attempts of HAL to destroy the humans.

This is not a new theme, nor has Raaid come up with any new ideas.

SlipBall 01-29-2013 08:38 AM

Memristor evolution

If Williams is right, we should start seeing memristive memory devices on the commercial market soon, within 1 to 5 years. [3] Some skeptics claim that other technologies have more promise, like quantum computers, light-based computers, IBM's 'Racetrack Memory', etc. If so, then even better! But if Chua's contention holds that nanoscale devices automatically bring in unavoidable memristive behaviour, then it looks like no matter what kind of memory gets used, we will be forced to contend with memristance. So what kind of timeline should we expect if memristors do rule the computer world? No one knows, but for speculation's sake I think it could go something like this:
POSSIBLE INVENTIONS UTILIZING MEMRISTORS - TIME
1. Memory for cameras, cell phones, iPods, iPads, etc. - 1 to 5 years
2. Universal memory replacing hard drives, RAM, flash, etc. in all computer devices - 5 to 10 years
3. Complex self-learning neural networks and hybrid transistor/memristor circuits - 5 to 15 years
4. Memristive logic circuits on par with CPUs and other transistor circuits - 15 to 20 years
5. Advanced artificial thinking brains - 20 to 30 years?
6. Artificial conscious brains - ?
7. Memory and brains capable of living millions of years - ?
8. Duty-cycle artificial conscious beings capable of interstellar travel - ?
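
For a concrete sense of what memristance means, here is a minimal Python sketch of the linear ion-drift memristor model published by HP Labs (Strukov et al., 2008): the device's resistance depends on how much charge has already flowed through it, which is the "memory" being talked about. All parameter values below are rough illustrative assumptions, not figures from the quoted article.

Code:

import numpy as np

R_ON, R_OFF = 100.0, 16e3   # resistance when fully doped / undoped, ohms (assumed)
D = 10e-9                   # device thickness, metres (assumed)
MU_V = 1e-14                # dopant mobility, m^2 s^-1 V^-1 (assumed)

dt = 1e-4
t = np.arange(0.0, 1.0, dt)
v = np.sin(2 * np.pi * 1.0 * t)    # 1 Hz sinusoidal drive voltage

w = 0.1 * D                        # boundary between doped and undoped regions
i_hist = []
for vk in v:
    R = R_ON * (w / D) + R_OFF * (1 - w / D)   # instantaneous resistance
    i = vk / R
    w += MU_V * (R_ON / D) * i * dt            # charge flow moves the boundary
    w = min(max(w, 0.0), D)                    # boundary stays inside the device
    i_hist.append(i)

# Plotting i_hist against v would trace the pinched hysteresis loop that is
# the standard fingerprint of a memristor: the same voltage gives a different
# current depending on the charge history.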

