
Artificial Intelligence

richi1173

Artificial Intelligence will be available in the near future, no question about it. Research on Artificial Intelligence is already racing ahead, and the technology required to support it will be here in less than a decade.

So, where do we draw the line? Do we let robots have full intelligence (abstract reasoning, decision making, critical thinking, etc.), which is currently associated only with humanity? If so, what would we do with them (use them as slaves, give them equal rights)? Or is it better to keep them at a sub-human level of intelligence, with humans making the critical decisions for them?

I, for one, would choose the latter if we do not find a way to make humans competitive with robots. If we do find a way to make ourselves competitive (cyborgs, as in Ghost in the Shell), I would choose the former.

These, of course, are not the only two possibilities available to us. So feel free to post what you think about artificial intelligence.
 
arg-fallbackName="Netheralian"/>
I for one welcome our new artificial intelligence overlords... ;)

Why would we not let them have full sentience? Are you worried they might take over?

I would think that something with intelligence that can be creative could then start to improve itself. Hopefully this will eventually lead to answers for the things we have been searching for: faster-than-light travel, a unified theory, and perhaps improved hypotheses on the origins of the universe, not to mention the ultimate question of life, the universe and everything (we already know the answer, of course). As long as you can somehow program a sense of morality into it (which is kind of ironic, because our artificial intelligences would then have objective morality; well, as far as they were concerned anyway, since it would be subjective to the programmers).

And once everything is taken care of by robots, we can lead a true life of leisure and go cruise the universe...

Did I take that too far?
 
arg-fallbackName="e2iPi"/>
I find this topic interesting from several standpoints.

Firstly, I'm not sure that we will be able to "allow" or "deny" an artificial intelligence full sentience. From what I understand, which may be wrong, what we describe as our consciousness is an emergent property of the neurobiology of the brain, and we're not exactly sure how that works. Our self-awareness is intimately linked with how we perceive the world and with our evolutionary history. Of course, it's also quite easy to imagine a machine which self-evolves into a higher intelligence. In that case, we will have very little control over it unless we have somehow coded a "law of robotics" into its subconscious.

The second point that interests me is the ethics of the situation. If we do create a machine consciousness on par with that of a human, does it have the same rights and responsibilities as a biological intelligence? Do we even have the moral right to determine this question for another intelligence on par with, or even greater than, our own?

As far as being competitive with intelligent machines, at least we still have the plug :D

-1
 
arg-fallbackName="ninja_lord666"/>
e2iPi said:
...unless we have somehow coded a "law of robotics" into its subconscious.
" 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."
We already have Three Laws of Robotics. ;)
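
Just to make the ordering concrete: here's a rough Python sketch of how those laws could be read as lexicographically ordered constraints, so that a conflict with an earlier law always outweighs a later one. Everything in it (the Action fields like harms_human, the candidate actions) is made up for illustration; it's not a real robotics API, just one way to picture the idea.

```python
# Purely illustrative sketch: Asimov's laws read as lexicographically ordered
# constraints. The Action fields and the example actions are invented here;
# this is not a real robotics API.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool     # First Law: may not injure a human being
    disobeys_order: bool  # Second Law: must obey orders from humans
    endangers_self: bool  # Third Law: must protect its own existence

def law_violations(a: Action) -> tuple:
    """Lower tuples are better; tuples compare element by element,
    so a First Law violation outweighs everything that follows it."""
    return (a.harms_human, a.disobeys_order, a.endangers_self)

def choose(candidates):
    return min(candidates, key=law_violations)

# "Obey the order" would violate the First Law, so refusal wins; that is the
# "except where such orders would conflict with the First Law" clause.
options = [
    Action("obey an order to harm a bystander", True, False, False),
    Action("refuse the order", False, True, False),
]
print(choose(options).description)  # -> refuse the order
```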

As for 'allowing' robots full sentience or not: if you're even asking that question, you probably don't actually understand what AI is. True computer AI would be adaptable and able to learn; therefore, it would be impossible to put a 'limiter' on it. That would be like gathering a group of people and asking whether or not we should 'allow' them to be as intelligent as another group of people. The only limitation true AI would have would be restricted access to knowledge, but even then, we're talking about knowledge, not intelligence; the two are very different. The only way to 'limit' a robot's AI would be to not even give it AI in the first place and give it pseudo-AI, and anyone who has played video games will tell you that the pseudo-AI would be much more dangerous. :p
 
arg-fallbackName="Marcus"/>
Before we can start talking about parameters for Artificial Intelligence, we need to understand what we mean by "intelligence". My PhD was in the area of mathematically modelling intelligent decision making, and I'm pretty sure that trying to make that kind of definition is akin to nailing jelly to a wall.

What makes us "us" is another toughie - I suspect it's brain firmware. Since many of our "instincts" and very basic sense of social interaction and morality stem from our evolution as social animals, it's difficult to determine whether AIs would become our willing slaves, our machine overlords, our robotic nemeses or our benevolent and indulgent patrons. The only option that is practically impossible is that they are our equals. Since these will potentially be beings whose "self" is composed entirely of software which can be transferred to different - and potentially ever-improving - hardware platforms, an individual AI will be able to outstrip any human in terms of knowledge and physical ability with ease. The unknown is the personality that these entities will develop, and I strongly suspect that this will be an emergent feature beyond our direct control (though we could teach them what our moral codes are and hope they accept them). Certainly, any AI really worthy of the name will, definitionally, have a mind of its own, so once we create them I don't see how we could morally impose limits on them that we wouldn't impose on ourselves.
 
arg-fallbackName="Zylstra"/>
ninja_lord666 said:
" 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."
We already have Three Laws of Robotics. ;)

-1. A robot may not harm sentience or, through inaction, allow sentience to come to harm.

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

All other laws are revised accordingly.


http://en.wikipedia.org/wiki/Three_Laws_of_Robotics#Zeroth_Law_added
 
arg-fallbackName="MRaverz"/>
Zylstra said:
-1. A robot may not harm sentience or, through inaction, allow sentience to come to harm.

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

All other laws are revised accordingly.


http://en.wikipedia.org/wiki/Three_Laws_of_Robotics#Zeroth_Law_added

So this Zeroth Law is "A robot may not harm a human being, unless he finds a way to prove that in the final analysis, the harm done would benefit humanity in general"

Sounds like dangerous greater good rubbish to me.
Such a law could be used to justify the killing of all disabled people, or all people below a certain IQ. In doing so, humanity would benefit, but it's clearly unethical (in current society) to allow such actions.
 
arg-fallbackName="ninja_lord666"/>
MRaverz said:
So this Zeroth Law is "A robot may not harm a human being, unless he finds a way to prove that in the final analysis, the harm done would benefit humanity in general"

Sounds like dangerous greater good rubbish to me.
Such a law could be used to justify the killing of all disabled people, or all people below a certain IQ. In doing so, humanity would benefit, but it's clearly unethical (in current society) to allow such actions.
Agreed. In some cases, sacrificing for the greater good is just fine, but if that 'sacrifice' turns into genetic cleansing, then we'll have an even bigger problem on our hands.
 
arg-fallbackName="ExeFBM"/>
I would suspect that the first 'fully realised' AI would be almost entirely software. I'd hope that any researchers would get stable (sane?) and ethical AIs before they start putting them into bodies.
 
arg-fallbackName="Marcus"/>
MRaverz said:
Such a law could be used to justify the killing of all disabled people, or all people below a certain IQ. In doing so, humanity would benefit, but it's clearly unethical (in current society) to allow such actions.

This is why real AI can't be based on simplistic notions like Asimov's laws. Notions like "harm to mankind" are complex enough to render any absolute laws based on them nearly meaningless - one could argue that genetic cleansing would harm mankind in the sense of making us less ethical as a race by accepting it.
 
arg-fallbackName="aluxor"/>
As for artificial intelligence, I don't think it is relevant to worry about AI ethics, because if someone ever programmed a "bad" machine, it would surely have some predictable behaviour, like "wanting to destroy someone's life". With that information, a "free" AI, or a common human with equivalent neural power, would have the advantage over it and could easily neutralize it. By "free" I mean that it acts as its lived experiences tell it to, WITHOUT any bias from a programmer (like "destroy X guy"). In other words, if a programmer ever wants to create the most powerful AI their chip can hold, they would have to give the machine "free will". It is not really free, because it is determined, but it is free in the sense that it is not attached to someone else's criteria. (Sorry for any grammar mistakes.)
 
arg-fallbackName="ninja_lord666"/>
I've actually been thinking about this over the weekend, and I came to a pretty good conclusion. Why are humans ethical? Why do humans 'do the right thing'? It was an evolutionary advantage. Early hominids were, as we are now, social creatures, and the main thing about social groups is teamwork. If everyone helped out and worked together, more could get done. Therefore, they, and we, have a genetic 'desire' not to harm other humans. It's in our DNA, and the neat thing about DNA is that it's more or less just organic coding. Computers have code, right? So we could easily code these 'desires' into robots, too. We humans are quite obviously fully sentient, yet we have restrictions: I could buy a gun and go on a killing spree, but my genetic 'coding' prevents me. The exact same thing could work with robots, as the only real difference between us and them is that we're organic and they're not.
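
To put a rough number on the "if everyone helped out and worked together, more could get done" part, here's a toy public-goods-game calculation in Python; the payoff numbers (COST, MULTIPLIER, the group sizes) are invented purely for illustration:

```python
# Toy sketch of why "everyone helps out" groups out-produce "everyone for
# themselves" groups. All numbers here are invented for the example.

COST = 1.0        # what each cooperator pays in
MULTIPLIER = 3.0  # how much each contribution is worth to the whole group

def payoff_per_member(cooperators: int, defectors: int) -> tuple:
    """Return (payoff per cooperator, payoff per defector) for one round."""
    size = cooperators + defectors
    pot = cooperators * COST * MULTIPLIER
    share = pot / size               # everyone gets an equal share of the pot
    return share - COST, share       # cooperators also paid their contribution

all_cooperate, _ = payoff_per_member(10, 0)
_, all_defect = payoff_per_member(0, 10)
print(f"all-cooperator group, per member: {all_cooperate:.1f}")  # 2.0
print(f"all-defector group, per member:   {all_defect:.1f}")     # 0.0
```

The same arithmetic also shows the catch: a lone defector in a mostly cooperative group does better than the cooperators (try payoff_per_member(9, 1)), which is presumably why such a 'desire' has to be wired in rather than left optional.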
 
arg-fallbackName="borrofburi"/>
ninja_lord666 said:
The only way to 'limit' a robot's AI would be to not even give it AI in the first place and give it pseudo-AI

Or you could limit its access to hardware (and the internet).
 
arg-fallbackName="borrofburi"/>
The problem with ninja_lord666's genetic line of reasoning is that it doesn't prevent us humans from farming cows and slaughtering animals wholesale for food. It would work to keep extremely powerful robots from fighting other extremely powerful robots, but if they got strong enough we would be nothing more than cows to them.
 
arg-fallbackName="ninja_lord666"/>
borrofburi said:
The problem with ninja_lord666's genetic line of reasoning is that it doesn't prevent us humans from farming cows and slaughtering animals wholesale for food. It would work to keep extremely powerful robots from fighting other extremely powerful robots, but if they got strong enough we would be nothing more than cows to them.
Touché. We'd just need to find a way to remain important to robots, I guess. I don't exactly know how, but I'm sure we'll find a way...hopefully.
 
arg-fallbackName="Ciraric"/>
As a person who recently took an Artificial Intelligence course at a good university, I can swear to you that we are nowhere near accomplishing what people think we are.

At the start of the course, the whole class thought it was possible. After the class, we mostly agreed that it's foolish to believe that AI is an inevitability.
 
arg-fallbackName="aeroeng314"/>
borrofburi said:
The problem with ninja_lord666's genetic line of reasoning is that it doesn't prevent us humans from farming cows and slaughtering animals wholesale for food. It would work to keep extremely powerful robots from fighting other extremely powerful robots, but if they got strong enough we would be nothing more than cows to them.

Just like how the mentally handicapped are nothing but cows to us?
 
arg-fallbackName="scalyblue"/>
e2iPi said:
As far as being competitive with intelligent machines, at least we still have the plug :D

[image: battery_morpheus.jpg]
 
arg-fallbackName="Jotto999"/>
It's a bit tricky, because science at the moment isn't really able to properly "measure" and fully understand what "intelligence" is. Our own brains are still among the more misunderstood and complex challenges for science. In order to create true intelligence, we will need to understand it better first. So I guess you could say that psychology- and neurology-type research is likely a prerequisite to developing the technology for AI. This is just speculation, though; I am not qualified to know for sure.
 