Master_Ghost_Knight
New Member
In regards to machine learning. Where do I even start?
Let me first preface this by saying that I’m personally not involved in projects directly related to machine learning. I have colleagues who are, and we often discuss in passing some of what is happening in the field, so take this with a grain of salt. That said, I’m a well-regarded software engineer, and as a software architect for my company I’m responsible for setting up most of the basic framework that everyone else works with, and with that comes some very relevant experience; the relevance will become clear. In my spare time I have also dabbled, as a hobby, in investigating the limits of computation: not just what computers can do today, but what a computer even is and what it is possible to do at all.
One of the problems I often deal with is how to maximize the reliability of a system: how to minimize the occurrence of problems and get the most useful time out of our devices. A lot of it comes down to how you structure your work, and how you structure your algorithms to be resilient to problems, so that even if someone does something wrong the consequences are less serious or less noticeable, the problem has a chance to fix itself before anyone notices that something went wrong, and in some cases the system does the right thing even when it was asked to do something completely bonkers. Even though these techniques don’t fix the underlying bugs or the occasional foolery that engineers commit, they mitigate the problems enough that we still reduce device downtime.
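To make that concrete, here is a minimal sketch of the kind of resilient structure I mean; the sensor, bounds, and fallback value are all hypothetical, purely to illustrate the pattern:

```python
import time

def resilient_read(sensor, retries=3, lo=0.0, hi=100.0, fallback=50.0):
    # `sensor` is any callable returning a float; names and bounds are made up.
    for attempt in range(retries):
        try:
            value = sensor()
            # Do something sane even when handed something bonkers:
            # clamp wildly out-of-range readings instead of propagating them.
            return max(lo, min(hi, value))
        except IOError:
            # Back off and give the problem a chance to fix itself
            # before anyone notices something went wrong.
            time.sleep(0.01 * (2 ** attempt))
    return fallback  # degrade gracefully instead of taking the device down
```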
Some of our tools have become so good at this, in fact, that it has become a problem in its own right: engineers still make mistakes that you would want to fix. If the device had failed spectacularly, if it had crashed and burned, you would notice very quickly and you would have to fix it; but because the system kind of does the right thing anyway (or you don’t see much of a difference), these problems go unnoticed. It becomes extremely hard to even realize that there is a problem.
And now we also have to design tools that do the exact opposite, i.e. make sure that problems are noticeable to developers so that they are motivated to fix them. It is not uncommon to wonder: if the system does the right thing anyway, should we even bother to fix it? And in some instances this dynamic becomes so complicated that it is quite hard to even test; it feels like we are doing the right thing, and the behavior in practice seems to confirm it, but it is very hard to be 100% sure.
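As a sketch of that opposite direction, the idea is a check that fails loudly for developers but stays quiet and self-healing for users (the `STRICT` flag is an invented example, not a real library switch):

```python
import logging

STRICT = True  # hypothetical build flag: True in development, False in production

def check(condition, message):
    # Make mitigated problems visible: crash loudly for the engineer,
    # but merely log and carry on for the end user.
    if condition:
        return
    if STRICT:
        raise AssertionError(message)
    logging.warning("mitigated: %s", message)
```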
And I’m still talking about more traditional code written by humans, with a very specific, well-known intent; things where we can at least understand individual parts exactly, even if the system as a whole is too complex for any one person to understand.
Now let’s shift our focus to machine learning.
The hot new trend in AI, machine learning: the way the product works is, to me, very analogous to a very sophisticated mask that you put on top of a pattern to assist in pattern recognition. Similar to Banburismus, except that instead of having one layer you have multiple layers, instead of a light being on or off at each node you can have a range of values in each node, nodes can connect pretty much anywhere else, you can also do math and other operations on them (in some cases it might even be Turing complete), and instead of looking at the result, you can attach a servo that controls a robotic arm or something.
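Stripped to the bone, that “mask” with a range of values in each node looks something like this; the sizes and weights are random placeholders, just to show the moving parts:

```python
import math
import random

def layer(inputs, weights, biases):
    # Each node blends every input by its weights, adds a bias, and squashes
    # the result into a range of values (0..1 here, via a sigmoid).
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

random.seed(0)
x = [0.2, 0.7, 0.1]                                    # raw pattern in
w = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
out = layer(x, w, [0.0] * 4)                           # stack more layers at will
print(out)  # the last layer's output could just as well drive a servo
```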
The technology is still pretty much in its infancy (of course it is), and it is not likely at this stage that these systems could become self-aware and start rebelling against humans (their tests do not promote “self-awareness”).
If you had asked me this question two years ago, you would have gotten a conservative answer and a very skeptical tone about the capabilities of such techniques.
But the fact of the matter is (given what we have achieved):
A) We are finding it extremely difficult to come up with examples of a single, well-defined task that a human can do that we cannot teach a computer to do better.
B) Some parts of our own brain that do very complicated stuff work in a similar fashion.
Yes, it may not be perfect, but as my colleagues quite often point out, it doesn’t have to be perfect; it just has to be better than humans.
For example, when you put a machine-learned program in charge of driving an autonomous car, although in testing it almost invariably reacts correctly to the tested set of real-world inputs, you still have that nagging feeling that, because nobody understands exactly how it works, there could be a particular combination of inputs, one we can’t predict or even hope to effectively test for, that triggers a very unusual interaction in the neural network and makes it go berserk and crash the car into a tree.
In fact, given the way it has been designed, it is almost certain that some sub-networks are trying to do some task that under some conditions they will get very wrong; but there can also be other sub-networks responsible for mitigating things when some of those conditions occur, so you never get to see the problem (until, of course, those mitigating systems fail too).
But people are not much concerned when humans drive cars, and we know that humans can go wrong and drive themselves into trees. Humans can be intoxicated, they can be tired, they can be distracted, they can miss details or fail to see things, and sometimes they unexpectedly get into an accident under the best conditions while fully focused on the task of driving.
This is not to say that an AI can’t get tired, because as it turns out it can. And by tired I don’t mean it suddenly “feels the need to go to sleep”; I mean that its behavior starts to degrade as time progresses, either because an integrating system saturates or overflows, or because some nodes get latched into unusual states, and resetting the AI brings performance back up. These “tired” states are harder to weed out because they only occur after the system has been running for a long time, and your tests typically don’t run that long.
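Here is a toy illustration of one way an integrating system can “tire out”; the controller and its constants are invented for the example:

```python
class DriftyController:
    LIMIT = 1000.0  # saturation point of the internal accumulator

    def __init__(self):
        self.accum = 0.0  # integrates error and is never flushed

    def step(self, error):
        # Tiny residual errors pile up until the accumulator saturates,
        # at which point the output carries a permanent bias.
        self.accum = max(-self.LIMIT, min(self.LIMIT, self.accum + error))
        return 0.5 * error + 0.001 * self.accum

    def reset(self):
        self.accum = 0.0  # the "turn it off and on again" cure

ctrl = DriftyController()
for _ in range(2_000_000):   # each step looks harmless in a short test run
    ctrl.step(0.01)
print(ctrl.step(0.0))        # ~1.0 of pure drift; after reset() it would be 0.0
```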
And as a side note, robots driving people around isn’t even new, or rather flying them. Modern aircraft have for decades had the ability to take off, fly from point A to B, and land while the pilot is asleep for the entire journey (which has actually happened more times than we are comfortable admitting). Nowadays a pilot’s job (even though they could fly the plane if necessary) is to oversee the machine that does the flying.
True, they don’t use self-learned AI but more traditional brains that we can understand and prove correct; autonomous nonetheless.
We tend to use humans as a sort of gold standard for AI a lot.
Sure, these machines can do one thing and one thing only, while humans can easily do multiple different tasks, but this is only a limitation that we humans imposed through the design of the bots' training. You probably wouldn't do much better if you had been taught one task and one task only since you were a baby.
Humans have the advantage of having more than just the pattern-recognition part of the brain. We have multiple sources of motivation, and our goals are fuzzier and less hard-lined (which in turn gives us a more general sense of purpose and an illusion of spontaneous motivation); but we are also able to self-evaluate how well we did at a given task and understand (in a fuzzier way) in what direction we should correct our behavior so that we do better in the future. And our goals, and the way we evaluate ourselves, can be reshaped to account for new information.
Take babies, for example: as cute as they are, they are pretty much as dumb as they come. They will fail to perform even the most remedial of tasks. But because they are motivated to interact with the world, they will fail miserably time and time again, and even if only at an unconscious level they understand some of their own failures; as experience accumulates they start to tune the right strength for each muscle and the right timing of each muscle relative to the others, and they start to do less of what fails and more of what succeeds. Eventually human babies become grown adults, capable of operating cars, holding jobs, and posting dumb shit on the internet.
Yes, we do have the benefit of a lot of things that are innate, and some of those develop as we grow; biology does give us a lot, but that only goes so far.
Take something as intuitive as speaking: we can identify parts of the brain that are crucial for the formation of speech, and if they were damaged I would not be able to talk, even though I can still breathe and move my mouth and tongue. Nonetheless, I'm reminded that right now I'm pressing buttons on a piece of plastic so that certain images appear on a screen, and these images remind me of sounds that play in my head in a language foreign to my own. Language is an extremely cultural thing, and it must be acquired from experience; biology alone is not enough.
Take, as another example, learning to master a completely new skill. I think everybody has experienced what that feels like: you start by sucking really badly at it, even the most basic tasks are difficult, but with each failure and success you start to get "the hang of it", until you eventually get good at it, sometimes so good that you can't remember what it was like to suck (but suck you did).
But if you are then asked to explain the skill you mastered to a novice, you will struggle. Even if you can explain what you are trying to do and why you do something a certain way, often the real reason you do it the way you do is "because it feels right", not some logical deduction actively happening in your head.
In the current state of AI (although we are starting to see a shift away from this), the "goals" are still pretty much set in stone by the initial programmer (they are not corrected to account for new information); the utility (or evaluation) function is also defined in advance, often without telling you the direction in which you most likely need to correct; and all of this, plus the program that does the tuning of the AI, is a separate entity from the program that does the task itself. In living organisms these processes are much more fluid.
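To make the "direction" point concrete, here is a sketch contrasting an evaluation that only scores a parameter with one that also points the way to improve it; the one-parameter task is invented:

```python
# A fixed utility tells you HOW GOOD a parameter is, not which way to move it.
def utility(w):
    return -(w - 3.0) ** 2      # secretly peaks at w = 3.0

# A differentiable evaluation also yields a direction of correction.
def gradient(w):
    return -2.0 * (w - 3.0)     # derivative of the utility with respect to w

w = 0.0
for _ in range(100):
    w += 0.1 * gradient(w)      # step in the improving direction
print(round(w, 3))              # ~3.0; with the score alone we'd search blindly
```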
Humans have been around longer and benefit from certain learning processes that we ourselves don’t understand, or don’t even realize exist, well enough to hand over to robots.
But as we learn to make better algorithms, and to understand what new modules and features we can add to make learning (or understanding) much more efficient and much more meaningful, computers will close that gap very quickly.
And at the end of the day, our brains are nothing more than meat computers; in principle there is nothing they do that we can't do with silicon.
Now here is where we start to kick things up a notch. Artificial brains have advantages that our meat brains do not.
For example, you can very easily swap parts of an artificial brain and upgrade it, dig into its innards and change things. And although this is kind of possible with our meat brains, it is not something you want to do: not only is it physically more difficult, but if you make a mistake in an artificial brain you can roll back time, while in your meat brain there is no undo button.
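That undo button is just checkpointing; a minimal sketch, with the "brain" reduced to a bag of made-up parameters:

```python
import copy

brain = {"weights": [0.1, 0.2, 0.3]}   # the brain is just its parameters

checkpoint = copy.deepcopy(brain)      # snapshot a known-good state

brain["weights"] = [9.9, -7.0, 42.0]   # dig into the innards, botch the upgrade

brain = copy.deepcopy(checkpoint)      # roll back time; no harm done
print(brain["weights"])                # [0.1, 0.2, 0.3]
```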
Computer brains also have the benefit of access to literally computer-level resources and reliability; imagine having gone through your math classes with an advanced calculator built into your brain. Actually, imagine having that ability right now.
Computers can also run extremely fast, and you can get a whole bunch of them to work together and expand the available resources for one collective task.
This ability to run AI in hyper-time with hyper-resources is very powerful: you can run hundreds of brains at once, and each brain can run through a vast number of tests (“experience”) in the blink of an eye; several orders of magnitude more than what a human could ever do in a lifetime.
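A sketch of the shape of that idea, with a dummy scoring task standing in for real experience:

```python
from concurrent.futures import ProcessPoolExecutor
import random

def run_trials(brain_id, trials=100_000):
    # Stand-in for one brain grinding through simulated experience;
    # the score is meaningless, only the parallel shape matters.
    rng = random.Random(brain_id)
    return sum(rng.random() for _ in range(trials)) / trials

if __name__ == "__main__":
    # Hundreds of brains, each racking up lifetimes of trials, side by side.
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(run_trials, range(200)))
    print(max(scores))  # keep the best performer, discard the rest
```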
Sure, human brains have the benefit of more sophisticated systems than current machine brains, but machines can still surpass humans, because the systems they do have can be made really, really good; at least better than what evolution has managed to do for humans so far.
Now the obvious question is: since the current state of AI still can’t match the complexity of the human brain in terms of versatility, is there a way to cheat, by, let’s say, oh, copying a human brain into a virtual world?
The answer, which may not surprise you, is yes. It is not that we don’t know how to do it; the only thing stopping us right now is that the human brain is quite complex, too complex for typical computers.
But with enough resources and motivation, combined with some innovation in computing performance, it will happen. It is not a matter of if, but when.
And this is where things start to get really scary, because this raises more difficult ethical, existential, legal, and political concerns than we can answer right now, or even hope to answer in the near future. If there is a point in our development where our technology outgrows us to a degree that is critically detrimental to human beings, this is it. We are simply not prepared for any of this. If you think humanity has problems now, you have seen nothing.
Sorry, this is already a long post, and I’m nowhere near done, so I think I will break it down into posts over time.
I will try to expand on some of those ethical and political concerns in the following posts.