Finger said: Too late. It's already happened, courtesy of Google.
IrBubble said:
e2iPi said: at least we still have the plug
Heh, that is until they find a way to outsmart us.
borrofburi said:The problem with ninja_lord666's genetic line of reasoning is that it doesn't prevent us humans from creating farm cows and wholesale slaughter of animals for food. It would work to keep extremely powerful robots from fighting other extremely powerful robots, but if they got strong enough we would be nothing more than cows to them.
ninja_lord666 said: "...unless we have somehow coded a 'law of robotics' into its sub-conscious."
We already have Three Laws of Robotics:
"1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."
As for 'allowing' robots full sentience or not: if you're even asking that question, you probably don't actually understand what AI is. True computer AI would be adaptable and able to learn; therefore, it would be impossible to put a 'limiter' on it. That would be like gathering a group of people and asking whether or not we should 'allow' them to be as intelligent as another group of people. The only limitation true AI would have would be restricted access to knowledge, but even then, we're talking about knowledge, not intelligence; the two are very different. The only way to 'limit' a robot's AI would be to not give it AI in the first place and give it pseudo-AI instead, and anyone who has played video games will tell you that the pseudo-AI would be much more dangerous.
The idea is that rather than programming values directly into a superintelligent machine before we switch it on, we should program into it a procedure for extracting human values by examining human beings' behavior and biology, including scanning our brains. The CEV algorithm would then work out what we would do after we had carefully reflected upon the facts of a situation, and then do exactly that; this is an extrapolation of our preferences given more knowledge and time to think. For example, if the human race would, after careful reflection, decide to cure a certain set of tropical diseases in south-east Asia, then the CEV algorithm would make that happen. In the case that humans have vastly conflicting preferences, the algorithm has a problem on its hands; some sort of averaging or automated compromise solution would be needed.
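The "averaging or automated compromise" step above can be made concrete with a toy sketch. Everything here is illustrative: the names, the score numbers, and the mean-score rule are assumptions for demonstration, not part of any actual CEV specification.

```python
# Toy sketch of preference aggregation: each (hypothetical) extrapolated
# person scores a set of outcomes, and the procedure picks the outcome
# with the highest average score as a crude "automated compromise".

def compromise(preferences):
    """preferences: dict mapping person -> dict of outcome -> score.

    Returns the outcome with the highest mean score across all people.
    """
    outcomes = next(iter(preferences.values())).keys()

    def mean_score(outcome):
        return sum(p[outcome] for p in preferences.values()) / len(preferences)

    return max(outcomes, key=mean_score)

# Hypothetical extrapolated preferences (scores in [0, 1]):
prefs = {
    "alice": {"cure_diseases": 0.9, "build_monuments": 0.2},
    "bob":   {"cure_diseases": 0.7, "build_monuments": 0.6},
}
print(compromise(prefs))  # cure_diseases (mean 0.8 beats 0.4)
```

Of course, a simple mean is exactly the kind of naive compromise the paragraph flags as problematic: it can steamroll a strongly held minority preference, which is why the conflict case is described as an open problem.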
scalyblue said:
e2iPi said: As far as being competitive with intelligent machines, at least we still have the plug
morphles said: I studied IT, so I know a little bit about AI. From that I'd say (as someone here already noted) that we are very far from human-level intelligence. Another observation: a "raw" simulation of the brain on supercomputers would be immensely power hungry. You would probably need a huge supercomputer with a dedicated power station, so I guess there would be no reason to be afraid of such an AI. At first it would be human-level, not superhuman; the fact that it is artificial should not matter, since its capabilities would be similar to ours while its requirements would be orders of magnitude bigger (a huge power station), so it would be relatively weak. Of course this is just speculation, food for thought. It might turn out that the most efficient "medium" for intelligence is very brain-like, and that artificial intelligence would therefore end up very similar to us. Again, that's just speculation. But so far, making intelligence seems to be a hard task, and history shows that people have overestimated their ability to create human-level AI. It might also turn out that there is some limitation that strongly caps AI at not much above human level, the way physics has limits like the speed of light and other "limiting laws" or "constants". I'd guess that cyborgs have the most potential; imagine the possibilities you'd have if you could connect your brain directly to a computer, or the Internet...
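The "dedicated power station" claim can be sanity-checked with a back-of-envelope estimate. All figures below are loudly approximate, commonly cited ballpark assumptions (not measurements): roughly 1e17 synaptic events per second for a whole-brain simulation, and roughly 1e9 operations per second per watt for supercomputer hardware of that era.

```python
# Back-of-envelope estimate of the power needed to brute-force simulate
# a brain, under illustrative assumptions stated in the comments.

BRAIN_OPS_PER_S = 1e17      # assumed: ~1e15 synapses * ~100 events/s each
SUPER_OPS_PER_WATT = 1e9    # assumed: supercomputer efficiency, ops/s per watt
BRAIN_WATTS = 20            # rough metabolic power of a human brain

watts_needed = BRAIN_OPS_PER_S / SUPER_OPS_PER_WATT  # 1e8 W = 100 MW
print(f"~{watts_needed / 1e6:.0f} MW for the simulation vs ~{BRAIN_WATTS} W for the brain")
```

Under these assumptions the simulation draws on the order of 100 MW, roughly a small power station, which is about six orders of magnitude worse than the brain's ~20 W; that gap is the substance of the post's argument, though better hardware or better-than-"raw" simulation would shrink it.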
bemanos said: anyhow, when do you believe the first ai "beings" are going to be made?
2012 - that's what is going to precipitate the end of the world, as predicted by the Mayans :shock:
Ciraric said: As a person that recently took an Artificial Intelligence course at a good university, I can swear to you that we are nowhere near accomplishing what people think we are. At the start of the course the class all thought it was possible. After the class we mostly agreed that it's foolish to believe that AI is an inevitability.
I'd have to agree: simply because we are near making machines with greater processing power than our brains does not mean we are close to making AI. The many trillions upon quadrillions of lines of coding required would surely collapse under their own weight. How lucid would its intelligence be? Could it learn and think of new things (like humans do every day), or would it only have the knowledge and intelligence of the era it was made in, unable to out-think a human at coming up with new ideas?
bemanos said: anyhow, when do you believe the first ai "beings" are going to be made?