Artificial Intelligence will be available in the near future, no question about it. Research on Artificial Intelligence is already racing ahead, and the technology required to support it will be here in less than a decade.
So, where do we draw the line? Do we let robots have full intelligence (abstract reasoning, decision making, critical thinking, etc.), the kind we associate only with humanity? If so, what would we do with them (use them as slaves, give them equal rights)? Or is it better to keep them at a sub-human level of intelligence, with humans making the critical decisions for them?
I, for one, would choose the latter if we do not find a way to make humans competitive with robots. If we do find a way to make ourselves competitive (cyborgs, as in Ghost in the Shell), I would choose the former.
These, of course, are not the only two possibilities available to us. So feel free to post what you think about artificial intelligence.