
A rant on AI

Master_Ghost_Knight

In regards to machine learning, where do I even start?
Let me first preface that I'm personally not involved in projects directly relating to machine learning. I have colleagues that are, and we often discuss in passing some of what is happening in the field, so take it with a grain of salt. But regardless, I'm a well-reputed software engineer, and as the software architect for my company I'm responsible for setting up most of the basic framework that everyone else has to work with, and with that comes some very important experience; the relevance is going to be made clear. I have also, in my spare time, dabbled as a hobby in investigating some limits of computation, more than just what computers can do today: what even is a computer, and what are the limits of what it is even possible to do.

One of the problems that I often deal with is how to maximize the reliability of a system, how to ensure that we minimize the occurrence of problems, and how to ensure that we get the most useful time out of our devices. A lot of it involves the way you structure your work and the way you structure your algorithms to be resilient to problems, such that even if someone does something wrong, the consequences are less serious or less noticeable, or the problem gets the opportunity to fix itself before anyone can notice that something went wrong, and in some cases the system does the right thing despite being asked to do something completely bonkers. Even though these techniques do not solve the underlying bugs and the occasional foolery that engineers commit, they mitigate the problems in such a way that we still reduce the downtime of devices.
Some of our tools have become so good at this, in fact, that it has become a problem of its own; engineers still make mistakes that you would want to fix. If the device had failed spectacularly, if it had crashed and burned, you would notice it very quickly and you would have to fix it, but now, because the system kind of does the right thing anyway (or you don't see much of a difference), these problems go unnoticed. It becomes extremely hard to even realize that there is a problem.
And now we also have to design tools that do the exact opposite, i.e. make sure that problems are noticeable to developers so that they are motivated to fix things. And it is not that uncommon to wonder, since the system does the right thing anyway, whether we should even bother to fix it. And there are some instances where this dynamic becomes so complicated that it is quite hard to even test; it kind of feels like we are doing the right thing, and the behavior in the field seems to indicate that, but it is very hard to be 100% sure.
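A minimal sketch of the kind of pattern I mean, in Python (the sensor and its read() method are made up for illustration): the system keeps doing something sensible, but the failure is surfaced loudly enough that somebody eventually fixes it.
[code]
import logging

logger = logging.getLogger("sensor")

DEFAULT_TEMP_C = 25.0  # made-up "safe" fallback value

def read_temperature(sensor):
    """Return a usable temperature even if the sensor misbehaves,
    while making the failure visible so the underlying bug gets fixed."""
    try:
        value = sensor.read()  # hypothetical driver call; may raise or return garbage
        if not (-40.0 <= value <= 125.0):
            raise ValueError(f"implausible reading: {value}")
        return value
    except Exception:
        # Graceful degradation: keep the device running on a safe default...
        logger.exception("temperature read failed, falling back to default")
        # ...but the log line is the "do the exact opposite" part:
        # it makes the problem noticeable instead of silently swallowed.
        return DEFAULT_TEMP_C
[/code]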
And I'm still talking about more traditional code written by humans, with a very specific, well-known intent, where we are at least able to understand parts of it exactly, even if the system is too complex to be understood by any one person.

Now let’s shift our focus to machine learning.
The hot new trend happening in AI is machine learning. The way the product works is, to me, very analogous to a very sophisticated mask that you put on top of a pattern to assist in pattern recognition. Similar to Banburismus, except instead of having one layer you have multiple layers, instead of a light being on or off at each node you can have a range of values in each node, nodes can connect pretty much anywhere else, oh, and you can also do math and other operations on it (in some cases it might even be Turing complete), and instead of looking at it, you can attach a servo that controls a robotic arm or something.
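As a toy illustration of what I mean by layers of masks with a range of values at each node, here is a bare-bones forward pass in Python/NumPy (the sizes and the random weights are arbitrary; a real system would of course be trained rather than random):
[code]
import numpy as np

rng = np.random.default_rng(0)

# Each layer is a "mask" (weight matrix) laid over the previous layer's pattern;
# nodes hold a continuous range of values rather than just on/off.
layer_sizes = [8, 16, 16, 4]  # input -> two hidden layers -> output
weights = [rng.normal(size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    for w in weights:
        x = np.tanh(x @ w)  # weighted sum plus a non-linearity at each node
    return x

pattern = rng.normal(size=8)  # stand-in for a sensor reading or a patch of pixels
command = forward(pattern)    # instead of looking at it, this could drive a servo
print(command)
[/code]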
The technology is still pretty much in its infancy (of course it is), and it is not likely at this stage that these systems could become self-aware and start rebelling against humans (their training does not promote "self-awareness").
If you had asked me about this 2 years ago, you would have gotten a conservative answer and a very skeptical tone on the capabilities of such techniques.
But the fact of the matter is (given what we have achieved):
A) We are finding it extremely difficult to come up with examples of a single well-defined task that a human can do that we cannot teach a computer to do better.
B) Some parts of our own brain that do very complicated stuff work in a similar fashion.
Yes, it may not be perfect, but as my colleagues point out quite often, it doesn't have to be perfect, it just has to be better than humans.
For example, when you put a program designed with machine learning in charge of driving an autonomous car, although in testing it almost invariably reacts correctly given the tested set of real-world inputs, you still have that nagging feeling that, because nobody understands exactly how it works, there might be a particular combination of inputs, one we can't predict or even hope to effectively test for, that triggers a very unusual interaction in the neural network and makes it go berserk and crash the car into a tree.
In fact, given the way it has been designed, it is almost certain that some sub-networks are trying to do some task that in some conditions they will get very wrong, but then there can also be other sub-networks that are responsible for mitigating things when some of those conditions occur, so you never get to see it as a problem (until, of course, those mitigating systems don't work either).
But people are not much concerned when humans drive cars, and we know that humans can go wrong and drive themselves into trees. Humans can be intoxicated, they can be tired, they can be distracted, they can miss certain details or fail to see things, and sometimes they even unexpectedly get into an accident in the best conditions while they are focused on the task of driving.
This is not to say that an AI can't get tired, because as it turns out they actually can. And by tired I don't mean they all of a sudden "feel the need to go to sleep"; I mean that their behavior starts to degrade as time progresses, either because an integrative system saturates or overflows, or because some nodes get latched after some unusual states, and resetting the AI helps bring the performance up again. These "tired" states are harder to weed out because they only occur after the system has been running for a long time, and your tests typically don't run for that long.
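A toy sketch of one way that kind of "tiredness" shows up, a saturating integrator with a tiny bias that never averages out (all the numbers are invented for illustration):
[code]
SATURATION = 1_000.0

class Controller:
    """Toy controller whose internal state slowly drifts until it saturates."""

    def __init__(self):
        self.integral = 0.0

    def step(self, error):
        self.integral += error + 1e-3  # tiny bias accumulates forever
        self.integral = max(-SATURATION, min(SATURATION, self.integral))
        return 0.5 * error + 0.01 * self.integral  # output degrades once saturated

    def reset(self):
        self.integral = 0.0  # the "nap" that restores performance

c = Controller()
for _ in range(2_000_000):   # long-running deployment; short tests never get this far
    c.step(error=0.0)
print(c.integral)            # pinned at the saturation limit: the controller is "tired"
c.reset()
print(c.step(error=0.0))     # back to a sensible output
[/code]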
And as a side note, robots driving people around isn't even new; or rather, flying them around isn't. Modern aircraft have had, for decades, the ability to take off, fly from point A to B, and land while the pilot is asleep for the entire journey (which has actually happened more times than we are comfortable admitting). Nowadays a pilot's job (even though they could fly the plane if necessary) is to oversee the machine that does the flying.
True, they don't use self-learned AI, but more traditional brains that we can understand and prove work; autonomous nonetheless.

We tend to use humans as sort of a gold standard for AI a lot.
Sure, these machines can only do one thing and one thing only, while humans can easily do multiple different tasks, but this is only a limitation that we as humans imposed through the design of the bots' training. You probably wouldn't do much better if you had only been taught one task and one task only since you were a baby.
Humans have the advantage of having more than just the pattern-recognition part in our brain: we have multiple sources of motivation, the goals are more fuzzy and less hard-lined (which in turn gives us a more general sense of purpose and an illusion of spontaneous motivation), but we are also able to self-evaluate how well we did in trying to achieve a certain task and understand (in a fuzzier way) in what direction we should correct our behavior so that we are able to do better in the future. And our goals and the way we evaluate ourselves can also be molded to account for new information.

Take babies for example; as cute as they are, they are pretty much as dumb as they come. They will fail to perform even the most menial of tasks. But as they are motivated to interact with the world, they will fail miserably time and time again, and even if only at an unconscious level they understand some of their own failures, and as experience accumulates they will start to tune the right strength of the muscles and the right time to pull each muscle relative to one another; they will start to do less of the things that fail and more of the things that succeed. Eventually human babies can become grown adults, capable of operating cars, holding jobs, and posting dumb shit on the internet.
Yes, we do have the benefit of a lot of things that are innate, and some of those develop as we grow; biology does give us a lot, but that only goes so far.
Take for example something as intuitive as speaking: we can identify parts of our brain that are crucial for the formation of speech, and if they were damaged I would not be able to talk, even though I could still breathe and move my mouth and tongue. Nonetheless, I'm reminded that right now I'm pressing buttons on a piece of plastic so that certain images appear on a screen, and these images remind me of sounds that play in my head in a language foreign to my own. Language is an extremely cultural thing, and it must be acquired from experience; biology alone is not enough.

Take as another example learning to master a completely new skill. I think everybody has experienced what that feels like: you start by sucking really badly at it, it is difficult to do even the most basic of tasks, but with each failure and success you start to get "the hang of it", until you eventually get good at it, sometimes even so good that you can't remember what it was like to suck (but suck you did).
But if you are then asked to explain the task you mastered to a novice, you will struggle; even if you can explain what it is you are trying to do and why you do something a certain way, oftentimes the real reason you yourself do it is "because it feels right" and not because of some logical deduction actively happening in your head.

In the current state of AI (although we are starting to see a shift away from this), the "goals" are still pretty much set in stone by the initial programmer (they are not corrected to account for new information), the utility (or evaluation) function is also defined in advance, often without telling you the direction in which you most likely need to correct, and all of this, plus the program that does the tuning of the AI, is a separate entity from the program that does the task itself. In living organisms these processes are much more fluid.
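Roughly what that separation looks like in code today; this is a generic sketch, not any particular framework, and the target, the utility function, and the random-search tuner are all invented for illustration:
[code]
import random

# The "goal" is fixed up front by the programmer and never updated...
TARGET = 42.0

# ...and so is the utility (evaluation) function; note that it scores you,
# but does not tell you in which direction to correct.
def utility(output):
    return -abs(output - TARGET)

# The task program is just a parameterised function...
def task(params, x):
    return params["w"] * x + params["b"]

# ...and the tuner is a separate program that nudges the parameters from outside.
def tune(params, trials=5_000):
    best = params
    for _ in range(trials):
        candidate = {k: v + random.gauss(0.0, 0.1) for k, v in best.items()}
        if utility(task(candidate, x=1.0)) > utility(task(best, x=1.0)):
            best = candidate
    return best

trained = tune({"w": 0.0, "b": 0.0})
print(task(trained, x=1.0))  # ends up close to TARGET, but the goal itself never moved
[/code]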
Humans have been around longer and have the benefit of certain learning processes that we ourselves don't understand, or don't even realize exist, well enough to give them to robots.
But as we learn how to make better algorithms, and understand what new modules and features we can add that make learning (or understanding) things much more efficient and much more meaningful, computers will very quickly close that gap.
At the end of the day, our brains are nothing more than meat computers; in principle there is nothing they do that we can't do with silicon.

Now here is where we start to kick things up a notch. Artificial brains have advantages that our meat brains do not.
For example, you can very easily switch parts and upgrade artificial brains, dig into their innards and change things. And although this is kind of possible to do in our meat brains, it is not something you want to do; not only is it physically more difficult, but if you make a mistake you can roll back time in an artificial brain, while in your meat brain there is no undo button.
Computer brains also have the benefit of having access to literally a computer's level of resources and reliability; imagine the experience of having gone through your math class while having access to an advanced calculator in your brain, actually imagine having that ability right now.
Computers nowadays can also run extremely fast, and you can get a whole bunch of them to work together and expand their available resources for one collective task.
This ability to run AI in hyper-time with hyper-resources is very powerful; you can run through hundreds of brains, and each brain can run through a vast number of tests ("experience") in the blink of an eye. Several orders of magnitude more than what a human could ever do in a lifetime.
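Some back-of-the-envelope numbers to make the point; every figure here is an assumption picked purely for illustration:
[code]
brains           = 100      # simulated brains running in parallel
speedup          = 1_000    # each runs 1,000x faster than real time
wall_clock_years = 1        # how long we let the farm run

simulated_years = brains * speedup * wall_clock_years
human_career    = 40        # years of practice a very dedicated human might manage

print(simulated_years)                  # 100,000 subjective years of "experience"
print(simulated_years / human_career)   # the equivalent of ~2,500 human careers
[/code]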
Sure, human brains have the benefit of having more sophisticated systems than current machine brains, but machines can still surpass humans because the systems they do have can be made really, really good. At least better than what evolution has managed to do for humans so far.
Now the obvious question is: since the current state of AI is such that it still can't match the complexity of the human brain in terms of versatility, is there a way to cheat, by, let's say, copying a human brain into a virtual world?
The answer, which may not surprise you, is yes. It is not that we don't know how to do it; the only thing stopping us from doing it right now is that the human brain is quite complex, too complex for typical computers.
But with enough resources and motivation, combined with some innovation in computing performance, it will happen. It is not a matter of if but when.
And this is where things start to get really scary. Because this raises more difficult ethical, existential, legal, and political concerns than we can answer right now, or even hope to answer in the near future. If there is a point in our development where our technology surpasses our ability to grow, to a point that is critically detrimental to human beings, this is it. We are simply not prepared for any of this. If you think humanity has problems now, you have seen nothing.

Sorry, this is already a long post, and I'm nowhere near done. So I think I will break this down into posts over time.
I will try to expand on some of those ethical and political concerns in the following posts.
 
arg-fallbackName="he_who_is_nobody"/>
Master_Ghost_Knight said:
Sorry, this is already a long post, and I'm nowhere near done. So I think I will break this down into posts over time.
I will try to expand on some of those ethical and political concerns in the following posts.

Thank you for sharing.

My biggest question has to do with consciousness. I do not see a reason why a computer could not be considered conscious. I think it is amazing what machines are already doing, and I am wondering how long you think it will be before we see a conscious machine?

 
arg-fallbackName="Master_Ghost_Knight"/>
If a tree falls in the woods and nobody is there to hear it, does it still make a sound?

Now that we realize it is possible to put a human brain on a computer (even though not necessarily with the equipment we have now), it raises a lot of questions.
Because if we can get the biological behavior accurate enough and put that artificial brain in a device that can interact with the world (I/O) as humans can, there is nothing to indicate that such a device would behave any differently from a real human.
You would not only be able to have a conversation with it, it would react emotionally, it would try to seek food when “it feels hungry”, it would “feel” bored, it would contemplate beautiful pieces of art, it would try to defend itself like a human would when instigated with “pain”, it would be able to tell you about its fears, it would feel sad, it would laugh, it would try to fight injustice, it would fall in love, it would even dream. It would behave in every way as if it was a real human.
This raises questions close to the core of what it even is to be conscious: is there really a "ghost in the machine"?
Because there really isn't any test you could run that would tell the difference between a flesh-and-blood human and a silicon human.
Sure, humans are made of organic material, and the interactions between neurons are a lot more chemical than electrical, but is it really the difference in material that makes a qualitative difference? What if we built computer components with organic materials and made them communicate with neurochemicals? Would it still be just a machine crunching numbers?
And to that effect, are we more than just machines made out of organic material?

Maybe the right question is not whether computers can be conscious, but rather whether they can be as conscious as us.
And the answer, I believe, is yes!
If it walks like a duck and it talks like a duck, it might just be a duck.

And this is an important question to answer, because the popular opinion right now is that machines are just dumb machines, just disposable property, but if they can feel pain then we have a problem. If you can inflict pain you can inflict cruelty, so it might be sensible to put some form of legal protection in place regarding what you can do to these brains.

The incredible ease with which you could do anything to these brains is scary.
[showmore=Please do not read this if you are easily impressionable. The content therein WILL damage you psychologically.]Imagine if you could turn up the dial on the ability to experience suffering to a thousand times more than real humans physically could; now imagine shoving such an AI into a virtual hell where it can only experience perfect suffering, oh, and by the way, we cranked up hyper-time so it can go through a year of experience in 1 second of real time. And now put a thousand of those simulations running in a basement somewhere, lock it, and just forget about it.

You would make the Christian concept of Hell a mere walk in the park.

Consider then that an AI running on a computer has a particular encoding that describes the state of the brain, and that we can translate that into a number and vice versa. The consequence of this is that there exist such things as truly evil numbers in the universe.
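To see why, consider that whatever state the simulator can save is just bytes, and bytes are just one (very large) integer; a toy round trip in Python, with a made-up stand-in for the brain state:
[code]
import json

state = {"neuron_potentials": [0.12, -0.07, 0.93], "t": 1024}  # stand-in for a saved brain state
blob = json.dumps(state, sort_keys=True).encode("utf-8")       # the state as bytes

as_number = int.from_bytes(blob, "big")                        # the state as a single integer
round_trip = as_number.to_bytes((as_number.bit_length() + 7) // 8, "big")
assert json.loads(round_trip) == state                         # and back again, losslessly
print(as_number)
[/code]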
This is the stuff of nightmares. If someone came to me, put a gun to my head, and said "you are going to let me scan your brain (well enough that it could be simulated) or I will blow your head off", knowing that this sort of thing might be possible, I would honestly consider that perhaps I have had a good life so far and probably I should cut my losses.

Now that also poses another question. If an AI experiences suffering, and if we flick a switch to just end it, did it really matter?
After all the “suffering” would have been completely wiped out of existence, it is just gone, it no longer exists.
And before you answer, consider the analogous question: if we kill a real person, did it matter that we tortured them beforehand?
These are important questions, too important to be left unanswered, because the technology that allows us to do this is right around the corner, and we are not prepared for it.
And the scary truth about it is that, logically, the answer to this seems to be no, that it doesn't matter. After all, suffering is pretty much just an emergent experience of conscious creatures.
If you are not a complete psychopath, you should find this answer totally unsatisfying; you should get this uncomfortable feeling of "what if the consciousness experiencing the suffering is your own?". It would certainly matter to you then, wouldn't it?
And perhaps therein lies the answer, or at least a compromise I can sleep with.
That suffering is only truly meaningful to the consciousness experiencing it. And even if the suffering of others is nothing to us, we as conscious creatures should follow a silent and unwritten contract that, for all intents and purposes, we should act as if it mattered personally to us, and we should strive to minimize the suffering of other conscious beings, so that we create a world in which we ourselves don't become the victims.[/showmore]

Next I will try to touch on some political and economic problems.
 
arg-fallbackName="thenexttodie"/>
Master_Ghost_Knight said:
In regards to machine learning, where do I even start?
[...]
I will try to expand on some of those ethical and political concerns in the following posts.

I think if you would indent at the start of each new paragraph, it would make your post much more readable.
 
arg-fallbackName="thenexttodie"/>
Master_Ghost_Knight said:
Now that we realize it is possible to put a human brain on a computer (even though not necessarily with the equipment we have now).

What does your above sentence mean?
 
arg-fallbackName="Master_Ghost_Knight"/>
thenexttodie said:
Master_Ghost_Knight said:
Now that we realize it is possible to put a human brain on a computer (even though not necessarily with the equipment we have now).

What does your above sentence mean?

Sorry for my terrible English.

I mean that we know how to emulate a human brain in a computer; we just don't have a powerful enough computer at this time to make it viable.
We have the know-how, just not the resources.
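To give a feel for the gap, here is a very rough order-of-magnitude estimate; all the figures are commonly cited ballpark numbers, not measurements, and serious estimates vary by several orders of magnitude:
[code]
neurons             = 8.6e10   # ~86 billion neurons
synapses_per_neuron = 1e4      # ~10,000 synapses each
update_rate_hz      = 1e3      # resolve the dynamics at roughly 1 kHz
flops_per_synapse   = 10       # a handful of operations per synapse per step

required = neurons * synapses_per_neuron * update_rate_hz * flops_per_synapse
print(f"{required:.1e} FLOP/s needed")   # ~8.6e19 FLOP/s

exascale_machine = 1e18                  # roughly today's biggest supercomputers
print(required / exascale_machine)       # ~86 of them, ignoring memory and interconnect
[/code]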
 
arg-fallbackName="Sparhafoc"/>
Master_Ghost_Knight said:
Sorry for my terrible English.

I mean that we know how to emulate a human brain in a computer; we just don't have a powerful enough computer at this time to make it viable.
We have the know-how, just not the resources.

The question might also be 'why even bother?'

Putting all your mental eggs in one basket only makes sense for biological evolution, not to agent-led technological development!
 
arg-fallbackName="Akamia"/>
Heh. Some of the things in here might be of interest to some transhumanists I know.


 
arg-fallbackName="Master_Ghost_Knight"/>
Sparhafoc said:
The question might also be 'why even bother?'
A. Because we can, thus we eventually will.
B. It is an easy cheat to get general AI without having to develop it independently. Of course, as soon as we achieve that, it won't stay that way for long.
 
arg-fallbackName="Sparhafoc"/>
Master_Ghost_Knight said:
Sparhafoc said:
The question might also be 'why even bother?'
A. Because we can, thus we eventually will.
B. It is an easy cheat to get general AI without having to develop it independently. Of course, as soon as we achieve that, it won't stay that way for long.

Sorry, my meaning wasn't clear.

I am talking about emulating a human brain in a computer. My point therein is to ask why we would set out to aim for such a low bar. Biological and evolutionary constraints on the human brain simply don't apply to a simulated mind, so it would seem odd if we purposely handicapped that simulated mind to emulate those restrictions.
 
arg-fallbackName="Dragan Glas"/>
Greetings,

The main reason for AI research, from chess-playing software upwards, is to understand consciousness - as I'm sure you're aware.

The main problem with letting such software evolve itself is the fear that we wouldn't have any control over how it evolves or understand how/what it thinks. There are obvious dangers here, like the child who doesn't understand something and/or learns the wrong lesson, resulting in their becoming a psychopath (like Nilsen, who - at 7 - saw his grandfather at peace in an open coffin, and wished to give that same peace to others, hence the reason he killed a number of people).

Another reason for developing AI is that, if we can create hardware that can support an AI system, we might be able to transfer (replicate?) our minds to robots - like the Westworld series explores.

Kindest regards,

James
 
arg-fallbackName="Sparhafoc"/>
Dragan Glas said:
The main reason for AI research, from chess-playing software upwards, is to understand consciousness - as I'm sure you're aware.

The main problem with letting such software evolve itself is the fear that we wouldn't have any control over how it evolves or understand how/what it thinks.

Isn't that basically just as true for each and every single human individual; we don't really have any control over how they evolve mentally even if we try to normalize their experiences through education and enculturation, and while we may have some ideas about how another human thinks, we can't actually reside in the landscape of their thoughts - we can only partly visit through the vehicle of their self-reporting. Perhaps that freedom to self-create is actually a necessity to cross the Rubicon into consciousness?

Dragan Glas said:
Another reason for developing AI is that, if we can create hardware that can support an AI system, we might be able to transfer (replicate?) our minds to robots...

I expect the technical difficulty there is less in the ability to transfer mental states onto a different medium than it is to imagine humans as we are now being able to bear that. I expect that if 'we' ever achieve that, 'we' will bear very little resemblance mentally to what we are now.
 
arg-fallbackName="Dragan Glas"/>
Greetings,
Sparhafoc said:
Dragan Glas said:
The main reason for AI research, from chess-playing software upwards, is to understand consciousness - as I'm sure you're aware.

The main problem with letting such software evolve itself is the fear that we wouldn't have any control over how it evolves or understand how/what it thinks.

Isn't that basically just as true for each and every single human individual; we don't really have any control over how they evolve mentally even if we try to normalize their experiences through education and enculturation, and while we may have some ideas about how another human thinks, we can't actually reside in the landscape of their thoughts - we can only partly visit through the vehicle of their self-reporting. Perhaps that freedom to self-create is actually a necessity to cross the Rubicon into consciousness?
But the fact that we can educate/enculturate a child does allow us to have some control - one might say a considerable amount of control - over how the child evolves psychologically.

With an AI system, left to its own devices, we have less control over how it evolves - we might have some control over a neural net through the scenarios we feed it, to help it learn but I'm not so sure about a genuine AI system (consider the consequences of Skynet's accidental self-awareness in the Terminator films, due to how it interprets the humans' attempt to switch it off).
Sparhafoc said:
Dragan Glas said:
Another reason for developing AI is that, if we can create hardware that can support an AI system, we might be able to transfer (replicate?) our minds to robots...
I expect the technical difficulty there is less in the ability to transfer mental states onto a different medium than it is to imagine humans as we are now being able to bear that. I expect that if 'we' ever achieve that, 'we' will bear very little resemblance mentally to what we are now.
Agreed - I think, freed from mortality, we - or, at least, the vast majority - would become like Dorian Gray.

Kindest regards,

James
 
arg-fallbackName="Sparhafoc"/>
Dragan Glas said:
But the fact that we can educate/enculturate a child does allow us to have some control - one might say a considerable amount of control - over how the child evolves psychologically.

I am not sure that quite works out though, because that would then apportion some of the blame of a murderer's actions onto the educators/enculturators of that child. No matter how much input we control, ultimately the output is out of our hands. That process is what makes us who we are, and whatever is unique in us is achieved by exceeding our normalizing training... so could AI ever become conscious if the training wheels are welded on?

As an aside, I do like throwing this out as a topic when everyone's drunk: AI is just the next great ratchet up in complex evolution. Humanity can rest well knowing we have served a cosmic Purpose. Genes -> Memes -> Synthetic
 
arg-fallbackName="Master_Ghost_Knight"/>
Should AI have the same rights and responsibilities as flesh-and-blood human beings?
If they think exactly like us, they will want them just as much as we do.
Imagine now a society where robots could go to work to earn money for themselves, pay taxes, where they could own property, even become the CEO of a multi-million-dollar company. Where crimes can be committed against, and by, robots.
The human economy revolves a lot around the biological needs of the human monkey. We need food to eat, we need ass-wipe, clothing, habitation to provide shelter from the harsh environment, protection for the moments of unconsciousness (that we call sleep), and comfort.
Robots don't need most of these things. This is not to say that being a robot doesn't come with its own set of expenses; maybe early silicon humans would be even more expensive to maintain than monkey humans. But because silicon humans are not stuck with just being monkey humans, they transcend the limitations of biology, and eventually they will become cheaper to maintain.
Silicon humans would not only be capable of being better workers than monkeys, they would be able to happily work for less money. Thus monkey humans will find it really hard to compete in the workforce; they will simply be outperformed in every aspect imaginable without even a chance to compete. If class warfare is already a real thing with just humans, good luck solving that problem!

But being synthetic is also not without its problems. Pulling off identity theft and taking over someone else's property is quite hard with monkey humans.
It is possible to pull off identity theft on biological humans by forging documents, but this can easily be combated by the original person presenting themselves to the authorities. Real humans are hard to fake and reproduce; sure, you can clone a person, but babies don't have much property to steal, and we can't make full adults yet, much less fully functional adults, and even if you could it is not really that useful. When you want to steal something from someone, you are an existing person who wants to transfer the property of some other existing person to yourself. Creating a fully functional new person for that purpose (one that didn't exist before) would not only be too difficult to pull off, transferring the property to them doesn't help you a lot.
Recreating biology is hard because we can't make it; recreating a synthetic, on the other hand, is easy, because all synthetics must be made. If you can make the original, you can make the copy, and if you can make a copy, how hard can it be for a synthetic to change itself to be exactly like the original and take its place?
If the reward is great, the incentive to do this is also great. Maybe you might think that we could use some special cryptographic key or some other hard-to-copy mechanism. But think of the world of today, where the looming danger of hacking or forging is always present. In any case, a synthetic human is still a human that exists in the physical world; it can always be copied, and you can always steal crucial identity parts from it. And the system just has to fail once and the game is over forever.

Let's now think about what would happen if a synthetic was caught committing a crime. How would you be able to prove that it was them and not a copy? Even if you could tell the difference, how would you even punish a robot? Would the threat of jail time still serve as an effective deterrent? What if the AI had the ability to shut itself off for the duration of the sentence; would it have learned anything?


Consider also the more serious socio-political implications. The dreaded machine revolution capable of destroying all humans.
You would think that because these robot humans were made from the brains of real humans they would be more sympathetic to biological humans, because, after all, their minds are the minds of someone who was once human. But even if they had the most angelic of consciences as a biological person, how many a powerful man with good intentions has become a tyrant?
And we are not talking just about any person; it's a person who has become detached from the human condition, who no longer has human issues. Someone who can not only become very smart, they can become smarter than any human could possibly be.
Most of how our brain works, even though it is very capable of doing a lot of things, evolved to solve the problem of reproduction. Machines don't have that problem.
Imagine now a synthetic that has become very economically powerful, makes lots of copies of itself, and controls vast resources such that it could keep making copies of itself, or is even capable of controlling a small country comprised of copies of itself. How exactly are you going to stop it if it decides to take over the world?
Keep in mind it is not just the smartest human; it is smarter than any human can be, and it is capable of working in hyper-time, oh, and there is a whole army of them, and they are more durable than flesh-and-blood humans.
A self-sufficient AI army may even render the nuclear threat obsolete. Even though nuclear bombs would still work, and would still make a mess of a synthetic army, a well-made synthetic might be able to survive one (definitely better than humans could). And if such an AI is willing to sacrifice copies of itself to achieve its goals, what could you possibly do to deter it?
And if such an imaginary war scenario were to play out, and flesh humans managed by sheer miracle to win such a war, how would you be able to be sure that you destroyed every single one of them? Because even if one managed to hide in a remote corner of the world, it would still be able to emerge hundreds of years later to try again.
It’s a losing battle.

But not everything about synthetic humans is bleak. There are reasons why, despite the risks, it might still be worth it to have them.
It is starting to be a very common theme, a common desire, for humanity to be able to expand, to reach for the stars and colonize the galaxy.
Monkey humans have no chance in hell of doing that. The distances are simply too great, the perils too many, and the flesh is too fragile to survive. Synthetics would not have such limitations, and currently they are our only real chance to have our influence stretch beyond our own world.
However, there would still be the question: if they were able to do it, would they want to?
 
arg-fallbackName="momo666"/>
I hope this question is on topic. I was thinking of asking this in private, but I think it's better to ask it in public; that way other people can learn something interesting.

I want to know if there is a computational limit and, if so, what it is. More specifically, how much computational power can you cram into the space of, say, something with a volume equal to the human brain (or whatever object you can think of)? I suppose there is a limit, but I have no idea what that is or what such power could calculate/simulate once it gets there.

If this question relies on too many unanswered topics, say physics we have not yet demonstrated, how about we divide the answer in two parts? One with an answer according to the laws of physics as we currently know them, the other with the most speculative physics you can think of.
 
arg-fallbackName="Master_Ghost_Knight"/>
A simple answer is, yes, there absolutely is a limit.
Now what that limit is depends a little bit on what you consider "computational power", and even "space" for that matter.

I will start by using a broader sense of computation, where there is a system that performs a task and generates an output.
Now, this system must be able to produce different types of outputs, and you must be able to distinguish between them.
If you have 2 states that you cannot tell apart, they are for all intents and purposes the same state, and I mean that if there is no experiment (or process of any kind) that could ever produce a discernible difference, then as far as the Universe is concerned they are the same thing, and any type of "ephemeral difference" you could ever have conceived is forever lost to the universe. If there are no distinct states, then you could simplify the system by replacing it with literally nothing (and just assume the result). And nothing does not a computer make.

Now, if you consider that in physics no distance shorter than a Planck length (which is very much not infinitesimal) makes any sense, the maximum theoretical "computational power" (here defined as the number of distinguishable states), ignoring every other consideration in physics (for simplicity's sake), is all the possible permutations of particles that you could find in a given volume densely packed down to the Planck length.
This is an absolutely astronomical number that I'm not even going to try to calculate, but it is definitely finite.
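Just to give a feel for the scale (the brain-sized volume is an arbitrary choice picked for illustration):
[code]
import math

PLANCK_LENGTH_M = 1.616e-35
brain_volume_m3 = 1.4e-3                 # ~1.4 litres, roughly a human brain

planck_volume_m3 = PLANCK_LENGTH_M ** 3
cells = brain_volume_m3 / planck_volume_m3
print(f"{cells:.1e} Planck-sized cells")  # ~3.3e101 cells

# Even with just two distinguishable states per cell, the number of
# permutations is 2**cells -- a number with on the order of 1e101 digits.
digits = cells * math.log10(2)
print(f"a number with about {digits:.1e} digits")
[/code]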

But you don't have the slightest chance of creating such a system where you can densely pack particles at the Planck length, for several reasons:
1. When you pack a lot of energy together, space has a tendency to collapse into a black hole. And from there you would lose the ability to distinguish the permutations of the internal arrangement. For all intents and purposes, regardless of what happens inside a black hole, the outside of the black hole cannot observe (distinguish) the state inside. And thus, by the distinguishability principle, information is lost. It is not totally lost, and here I need to point you to the holographic principle, in which "information" about this system is proportional to the area of its boundary (rather than its volume).
2. Particles at these small scales cannot stand still; one moment they have a particular configuration, the next moment they are just gone. Good luck extracting anything meaningful out of that. Especially when you consider the next point.
3. You need to observe the configuration, and to do that you need to interact with the system. If you can't observe it, by definition you can't distinguish it from different permutations of its states. You have things like the Heisenberg uncertainty principle, which limits how much you can even know about a measurement, and even if that weren't a thing, good luck creating a system capable of probing every single block of space at the Planck length in a region of any size.
4. At these scales virtual particles pop in and out of existence all the time, in a way that for all intents and purposes appears random to us. And this introduces what we call noise into your data, which effectively means that you cannot tell the difference between a result with one type of (unknown) random noise pattern applied to it and a different result with a different noise pattern.

From an engineering standpoint, I would say that the most realistic best density we can ever hope for is the permutations of maybe a few electrons in the outer shells of atoms, where the system is still stable enough (but just barely) to do anything useful. Which is a much smaller number.

But we haven't yet gotten to any practical design considerations. If we look at a modern CPU, even though we can make very small transistors (only a few atoms thin), a lot of the volume is wasted on electron guides (or internal wiring), i.e. physical elements whose sole responsibility is to conduct electrons from an electron pool (source) to the transistor such that the operation can be made, and then dump those electrons out to make room for the next operation. So atomic packing is not realistic either.

If you interpret a computation as being a single operation like an addition, then you are going to need hundreds of transistors just to do a simple addition. And your CPU doesn't just do addition; it does multiplication, it does boolean logic, it does control flow, etc., and all those operations need different dedicated transistor arrangements, so while your CPU is doing that one operation the vast majority of its contents is simply sitting idle.
However, this can be improved by employing a different philosophy of computer design, one that does not revolve around building a math-based computer.

However, if you are able to distort space, things get a little bit more complicated, because you would be able to fit more computer within the same bounded region if that region is distorted to contain more space. However, I'm not enough of a physicist to tell you how much you could distort space without it collapsing into a black hole, such that you could pull off that trick.

But we will definitely be able to beat neurons easily: neurons have a feature size in the range of micrometers, while we can create transistors with a feature length of a couple of nanometers (about 1,000 times smaller, which to a rough approximation means something like 100,000,000 transistors in the space of a single neuron; this does not mean we can make a practical design with that).
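The rough arithmetic behind that, with ballpark feature sizes assumed (about 10 micrometers for a neuron, about 10 nanometers for a transistor):
[code]
neuron_size_m     = 10e-6    # ~10 micrometers
transistor_size_m = 10e-9    # ~10 nanometers

linear_ratio = neuron_size_m / transistor_size_m
print(linear_ratio)          # ~1,000x smaller in linear dimension
print(linear_ratio ** 2)     # ~1,000,000x if you compare areas
print(linear_ratio ** 3)     # ~1,000,000,000x if you compare volumes
# The "hundred million per neuron" figure above sits between the naive area and
# volume bounds; none of this accounts for wiring, so it is not a practical design figure.
[/code]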
 