
Artificial Intelligence

arg-fallbackName="Shaedys"/>
I just hope when they try to take over humanity they all get that blue screen of death.
 
arg-fallbackName="scalyblue"/>
No, they'll almost certainly be running Snow Leopard. Pray for that sad mac face.
 
arg-fallbackName="Livemike2"/>
borrofburi said:
The problem with ninja_lord666's genetic line of reasoning is that it doesn't prevent us humans from creating farm cows and wholesale slaughter of animals for food. It would work to keep extremely powerful robots from fighting other extremely powerful robots, but if they got strong enough we would be nothing more than cows to them.

If they treat us as cows, what's to stop their children doing the same to them? Simply code in the following idea: "Destroying the previous, less powerful generation of sentients loses you the moral right not to be destroyed by subsequent, more powerful generations of sentients, and is therefore wrong and dangerous."
 
arg-fallbackName="scalyblue"/>
From a recent Obama speech

[image: obama-bots.jpg]
 
arg-fallbackName="death"/>
ninja_lord666 said:
...unless we have somehow coded a "law of robotics" into its subconscious.
"1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."
We already have Three Laws of Robotics. ;)

As for 'allowing' robots full sentience or not: if you're even asking that question, you probably don't actually understand what AI is. True computer AI would be adaptable and able to learn; therefore, it would be impossible to put a 'limiter' on it. That would be like gathering a group of people and asking whether or not we should 'allow' them to be as intelligent as another group of people. The only limitation true AI would have would be restricted access to knowledge, but even then, we are talking about knowledge, not intelligence; the two are very different. The only way to 'limit' a robot's AI would be to not give it AI in the first place and give it pseudo-AI instead, and anyone who has played video games will tell you that the pseudo-AI would be much more dangerous. :p

Three reasons why I reject the laws as functional:

Number 1: I do not fear the robots taking control; I fear the human/hacker master. The robots themselves are most likely not programmed for evil, but as we all know, writing bug-free / 100% secure code is impossible.

Number 2: Most likely the government/army will build the first robots with AI, so the rules would look more like this:

1. A robot must kill all humans / destroy all robots trying to gain unauthorized access to it, because a robot is expensive.
2. A robot must obey any orders given to it by authorized humans, as long as they do not conflict with the orders of a higher-level admin or with its ability to defend itself, because a robot is expensive.
3. A robot is not allowed to risk its own hardware to save a human life, because a robot is expensive.

Number 3, and the more serious objection: we build something self-learning that wants to kill us, and then try to prevent it from doing so? That seems like a stupid solution. If we're going to write uber-smart AI, why not give it human survival as its prime directive WITHOUT compromising human freedom? In all reality it will most likely treat us the way we treat our pets, and the relationship between us and our pets is such that we still lock them out of rooms, etc.

That's why we need a solution where robots want to help us and yet do not lock us up in a room where we cannot harm ourselves. Even when applying the three rules, there is nothing preventing the robot from locking you up in a place where it cannot hear your orders AND where you cannot harm yourself, because you cannot override a command if you are locked out. Therefore, technically speaking, using "pure" logic, the three laws are not 100% secure in theory, let alone in practice.
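
To make that loophole concrete, here is a toy Python sketch. It is a cartoon of the laws, not a real safety system, and every name in it is made up for illustration: the three laws become ordered vetoes, and the lock-them-up action passes every check.

# Toy sketch: the Three Laws as ordered vetoes. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    injures_human: bool    # First Law: direct harm
    permits_harm: bool     # First Law: harm through inaction
    violates_order: bool   # Second Law: disobeys a human order
    risks_robot: bool      # Third Law: endangers the robot itself

def permitted(a: Action) -> bool:
    """Check the laws in priority order; any violated law vetoes the action."""
    if a.injures_human or a.permits_harm:  # First Law
        return False
    if a.violates_order:                   # Second Law
        return False
    if a.risks_robot:                      # Third Law
        return False
    return True

# The loophole: confining a human in a padded, soundproof room injures no one,
# prevents self-harm, and disobeys no order the robot can hear.
lockup = Action("confine human where no orders can reach me",
                injures_human=False, permits_harm=False,
                violates_order=False, risks_robot=False)
print(permitted(lockup))  # True -- the rules as stated do not forbid it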
 
arg-fallbackName="scalyblue"/>
Uhm, much of Asimov's work was written to illustrate ways the three laws could be circumvented, even if they seemed ironclad.
 
arg-fallbackName="int3h"/>
"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." Eliezer Yudkowsky

I think this quote accurately describes what we could get if we just create a very efficient, general problem-solving AGI.

There is an interesting recent article on this topic:
http://www.good.is/post/Birthing-Gods

Fortunately, there are some ideas for creating friendly AI which doesn't turn the Earth into paper clips by default. :)

Quote from article:
The idea is that rather than programming values directly into a superintelligent machine before we switch it on, we should program into it a procedure for extracting human values by examining human beings' behavior and biology, including scanning our brains. The CEV algorithm would then work out what we would do after we had carefully reflected upon the facts of a situation, and then do exactly that; this is an extrapolation given more knowledge and time to think. For example, if the human race would, after careful reflection, decide to cure a certain set of tropical diseases in south-east Asia, then the CEV algorithm would make that happen. In the case that humans have vastly conflicting preferences, the algorithm has a problem on its hands; some sort of averaging or automated compromise solution would be needed.
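
To make that last averaging step concrete, here is a toy sketch in Python. It is a cartoon, not the actual CEV proposal: every name is hypothetical, and extrapolate() is a stub standing in for the genuinely hard part, working out what we would want given more knowledge and time to think.

# Cartoon of the compromise step from the quote above -- a toy, not CEV itself.
from statistics import mean

def extrapolate(preferences):
    # Placeholder for reflection/idealization: what this person would want
    # "given more knowledge and time to think". Here it is just the identity.
    return preferences

def naive_cev(population):
    ideal = [extrapolate(p) for p in population]
    options = {k for prefs in ideal for k in prefs}
    # Naive compromise: average each option's support across everyone.
    return {k: mean(prefs.get(k, 0.0) for prefs in ideal) for k in options}

people = [{"cure_diseases": 1.0, "paper_clips": -1.0},
          {"cure_diseases": 0.8, "paper_clips": -0.9}]
print(naive_cev(people))  # strong support for cures, strong veto on paper clips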
 
arg-fallbackName="Sinue"/>
scalyblue said:
e2iPi said:
As far as being competitive with intelligent machines, at least we still have the plug :D

[image: battery_morpheus.jpg]

Oh Snap!


For the record, I don't think it's really going to be an "us" vs. "them" scenario. I think our march towards cyberization will eventually "merge" organic and simulated intelligence, and the first strong AI will emerge from efforts like the Blue Brain project which are emulating the human brain in a supercomputer. The first AI, I think, will be functionally human.

I also see no reason why Digital Humans and Analog Humans couldn't eventually mate to produce biological offspring. The genome of the Digital Human would have to be custom built, with perhaps only the genetic code for the emulated brain being a necessary component. If you can build a genome, you can build an organism. Synthesizing the gametes would be a snap at that point.

Perhaps we're looking at more of an "Armitage III" future, rather than a "Terminator" future.
 
arg-fallbackName="morphles"/>
I studied IT, so I know a little bit about AI, and from that I'd say (as someone here already noted) that we are very far from human-level intelligence. Another observation is that a "raw" simulation of a brain on supercomputers would be immensely power hungry. You would probably need a huge supercomputer with a dedicated power station, so I guess there would be no reason to be afraid of such an AI. At first it would be human-level, not superhuman; the fact that it would be artificial should not matter. Its capabilities would be similar, but its requirements orders of magnitude bigger (a huge power station), so it would be relatively weak.

Of course, this is just speculation, food for thought. It might turn out that the most efficient "medium" for intelligence is very brain-like, and that artificial intelligence would therefore be very similar to us. Again, that's just speculation. But so far, making intelligence seems to be a hard task, and history shows that people have overestimated their ability to create human-level AI. It might also turn out that there is some limitation capping AI at not much above human level, just as physics has limits like the speed of light and other "limiting laws" or "constants".

I'd guess that cyborgs have the most potential; imagine the possibilities you'd have if you could connect your brain directly to a computer, or to the Internet...
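
To put rough numbers on the power argument (these figures are assumptions for illustration, not measurements: a biological brain runs on roughly 20 W, while a large simulation machine draws on the order of megawatts):

# Back-of-envelope comparison; both figures are assumed round numbers.
brain_watts = 20           # approximate power budget of a human brain
simulator_watts = 5e6      # assumed draw of a brain-scale supercomputer

ratio = simulator_watts / brain_watts
print(f"simulated brain: ~{ratio:,.0f}x the energy of the biological one")
# ~250,000x -- "orders of magnitude bigger", as argued above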
 
arg-fallbackName="bemanos"/>
morphles said:
I studied IT, so I know a little bit about AI. [...] I'd guess that cyborgs have the most potential; imagine the possibilities you'd have if you could connect your brain directly to a computer, or to the Internet...

Interesting, but I recently saw this: http://bluebrain.epfl.ch/ . The chief professor said that by the year 2018 he may have a human-level AI unit ready. Opinions? Is this possible?
 
arg-fallbackName="e2iPi"/>
bemanos said:
anyhow, when do you believe the first ai "beings" are going to be made?
2012 - that's what is going to precipitate the end of the world, as predicted by the Mayans :shock:

On a more serious note, it seems to me entirely possible that we would not even recognize the first machine AI. A machine would have a completely different perception of the world from ours, different motivations, and therefore different goals. Perhaps the internet is already self-aware, with no way of communicating with humanity? There is no evidence that I know of which would lead to such a conclusion, but it is nevertheless a possibility.

 
arg-fallbackName="Ozymandyus"/>
There's always a mistake made in these discussions that equates human-level intelligence with BETTER intelligence. The problem with this assumption is that there are many different types of intelligence, some of which are useful in different situations. Generally speaking, when people talk about A.I. they are looking for imaginative intelligence, something at which humans excel compared to computers. But we have to remember that in terms of mathematical intelligence, raw information-processing intelligence, and other types of intelligence, computers are already light-years ahead of us. As soon as you can combine imaginative intelligence with the intelligence that all computers have by default, it can get pretty scary.

On another note, we are always trying to limit what we call A.I. to something EXACTLY like our intelligence. Imaginative artificial intelligence is advancing in ways not exactly parallel to the ways we think, but still will be able to achieve complex problem solving without having to model an entire human brain. The genetic algorithm and neural net approaches which are currently used to do this sort of intelligence are being improved upon, and seem far more likely to produce imaginative problem solving than trying to feed information through some kind of human brain emulation. (Though I do believe that Blue Brain is useful for other reasons, of course, such as drug-brain interactions and human-computer interfacing. It just isn't the future of AI.)
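
For flavor, here is a minimal sketch of the genetic-algorithm idea in Python, under toy assumptions: a known bitstring target stands in for a real fitness landscape, which genuine applications would never have in advance.

# Minimal genetic algorithm: selection, crossover, mutation on bitstrings.
# Toy assumptions throughout; the target is known only to the fitness function.
import random

TARGET = [1] * 20

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):
        break
    parents = pop[:10]  # truncation selection: keep the top fifth
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(40)]
print(f"stopped at generation {gen}, best fitness {fitness(pop[0])}")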

I'm going to guess that we will be seeing more fruits from these approaches within 10 years (though they are already in use now). Is it AI as we have often imagined it? No... but what is really the point in making a computer self-aware? By definition it immediately becomes less useful, and isn't that kind of the point?

A computer derives laws of motion with no input except watching a pendulum swing: http://www.wired.com/wiredscience/2009/04/newtonai/
A computer formulates hypotheses, designs and runs experiments, analyzes the data, and picks which experiments to run next:
http://www.wired.com/wiredscience/2009/04/robotscientist/
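
That first result reportedly came from evolutionary symbolic regression: searching a space of candidate formulas for one that fits the observed motion. Here is a toy flavor of the idea in Python, with a hand-picked candidate list standing in for the expression trees the real systems evolve.

# Toy "law discovery": score candidate formulas against simulated small-angle
# pendulum data and keep the best fit. Real systems evolve expression trees;
# this hand-picked list is just for illustration.
import math

g, L = 9.8, 1.0
omega = math.sqrt(g / L)
data = [(0.05 * i, 0.1 * math.cos(omega * 0.05 * i)) for i in range(200)]

candidates = {
    "a*cos(w*t)": lambda t: 0.1 * math.cos(omega * t),
    "a*sin(w*t)": lambda t: 0.1 * math.sin(omega * t),
    "a*t":        lambda t: 0.1 * t,
    "a*exp(-t)":  lambda t: 0.1 * math.exp(-t),
}

def sq_error(f):
    return sum((f(t) - x) ** 2 for t, x in data)

best = min(candidates, key=lambda name: sq_error(candidates[name]))
print(best)  # a*cos(w*t): the small-angle pendulum solution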
 
arg-fallbackName="SagansHeroes"/>
Ciraric said:
As a person that recently took an Artificial Intelligence course at a good university I can swear to you that we are nowhere near accomplishing what people think we are.

At the start of the course the class all thought it was possible. After the class we mostly agreed that it's foolish to believe that AI is an inevitability.
I'd have to agree. Simply because we are near making machines with greater processing power than our brains does not mean we are close to making AI... The many trillions upon quadrillions of lines of code required would surely collapse under their own weight. How lucid would its intelligence be? Could it learn and think of new things (like humans do every day), or would it only have the knowledge/intelligence of the era it was made in, unable to out-think a human at coming up with new ideas?

If we could make AI real in the next few centuries, I would say it's a very bad idea. Technology is already moving so fast that we can't handle it and don't get to fully check the feedback from its effects on the Earth, the environment, and people. Think of all the previously "good" things that haven't turned out so good, like lead in petrol/paint/food/everything, CFCs (ozone depletion), industrialisation (global warming), tobacco... etc.
We have so many things we need to catch up with and try to balance out or solve to correct the state of the Earth at the moment; I think throwing in another massive curve-ball, such as an entirely different intelligent species, would not be wise.
 
arg-fallbackName="UrbanMasque"/>
bemanos said:
anyhow, when do you believe the first ai "beings" are going to be made?

These robots have already been made, and the Japanese are finding ways for us to have sex with them. :lol:

... :? ..anyone?

Artificial intelligence, by definition, already exists in several aspects of human society - I think the most common interaction we have with it is in video games. I don't think AI will ever manifest into what sci-fi depicts it as. Data on Star Trek will never exist with the current system of computing we have now, because computers do what we tell them to do. Such a machine will never get curious about its origins, or argue about its uniqueness, or get wrapped up in human nature - it will only do what you program it to do. What we are doing with robots now is trying to give them the illusion of free thinking, but again - they will only respond the way we command them to respond.

And if we could possibly create AI in the next few centuries, I hope it will be used to represent humans and sent on century-long missions into deep space - but in all likelihood it will be used to fight wars.
 
arg-fallbackName="JustBusiness17"/>
I want my robot to be a Lion :arrow: Then I'll ride it whenever I need to run errands :cool:

[image]



We're going to have a lot of fun together :D

 