
Computers computing more than the human brain

arg-fallbackName="Master_Ghost_Knight"/>
Ozymandyus said:
Just because I personally cannot design a plane does not mean it cannot be done.
I do not claim that it can't be done; what I do claim is that no one knows how to do it, neither will they know how to anytime soon.
OK, you don't know how to make a plane, but if someone else knows then planes can be made; if no one knows how to make a plane, nobody will build it.
Ozymandyus said:
They seem to think it's possible, and are quite a bit more informed about the limitations of software and hardware than you seem to be... so I'm going to trust them on this. You can be among the people who laughed at the Wright brothers if you want, though.
I have designed computers from the ground up; all you know how to do is assemble parts on already-built hardware. If you think that you are more informed about its limitations than I am, then please be so kind as to explain them.
 
arg-fallbackName="Ozymandyus"/>
Master_Ghost_Knight said:
I do not claim that it can't be done; what I do claim is that no one knows how to do it, neither will they know how to anytime soon.
OK, you don't know how to make a plane, but if someone else knows then planes can be made; if no one knows how to make a plane, nobody will build it.
For the last time: there are people who DO know how some of this stuff can be done, and they ARE building it. Your claim that 'they will not know how to anytime soon' is based on an incomplete understanding of what is being done in the field of artificial intelligence.
I have designed computers from the ground up; all you know how to do is assemble parts on already-built hardware. If you think that you are more informed about its limitations than I am, then please be so kind as to explain them.
I've already explained my position (in that very quote): I am not more informed, but other people who ARE more informed are working on the problems and believe they have solutions - I trust them. Have you ever worked with neural networks or artificial intelligence of any kind? Have you even read the literature on such things? Then why are you so staunch in your position? Hardware limitations are not insurmountable. Well-designed software can simulate hardware that does not have those limitations; it just takes many times the processing power.
 
arg-fallbackName="ImprobableJoe"/>
Master_Ghost_Knight said:
I do not claim that it can't be done; what I do claim is that no one knows how to do it, neither will they know how to anytime soon.
OK, you don't know how to make a plane, but if someone else knows then planes can be made; if no one knows how to make a plane, nobody will build it.
No one knew for sure how to build a plane until the moment someone did it. There were designs for centuries, and varying degrees of successful trials over decades before the "official" first flight. Unless you have a crystal ball, you can't simply declare "neither will they know how to anytime soon."
 
arg-fallbackName="Master_Ghost_Knight"/>
Ozymandyus said:
For the last time: there are people who DO know how some of this stuff can be done, and they ARE building it. Your claim that 'they will not know how to anytime soon' is based on an incomplete understanding of what is being done in the field of artificial intelligence.

I've already explained my position (in that very quote): I am not more informed, but other people who ARE more informed are working on the problems and believe they have solutions - I trust them. Have you ever worked with neural networks or artificial intelligence of any kind? Have you even read the literature on such things? Then why are you so staunch in your position? Hardware limitations are not insurmountable. Well-designed software can simulate hardware that does not have those limitations; it just takes many times the processing power.

facepalm.jpg


OK, if they are doing it, then let's make a bet: if they complete it in the next 2 years, you can humiliate me publicly. If they don't, I will humiliate you. Wanna bet?
ImprobableJoe said:
No one knew for sure how to build a plane until the moment someone did it. There were designs for centuries, and varying degrees of successful trials over decades before the "official" first flight. Unless you have a crystal ball, you can't simply declare "neither will they know how to anytime soon."
No! That is a myth.
 
arg-fallbackName="Ozymandyus"/>
What specifically is it you want them to 'complete' for us to agree to this humiliation? Because honestly, they have already achieved most of what I've been talking about. So I have no problem taking such a bet, even if a 2 year time frame is quite a bit short of what I stated from the outset: within 10-20 years.

You are clearly completely unable to read stuff for yourself about this issue, and are so caught up with the hardware limitations that you have to deal with that you are unaware of the emergent processes that can be achieved by software. It is the software that is mimicking the complicated connectivity and behavior of neural pathways, and can self adjust and form new connections to other pieces of software.
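As a rough sketch of the kind of self-adjusting software being described here - a toy threshold unit with a Hebbian-style weight update, where the class, the numbers and the update rule are purely illustrative rather than taken from any real AI package:

```python
# Toy sketch of "software as neural pathway": units that can be wired to
# other units at runtime and that adjust their own connection strengths.

class SoftNeuron:
    def __init__(self, label):
        self.label = label
        self.inputs = {}          # source neuron -> connection weight

    def connect(self, source, weight=0.1):
        """Form a new connection to another piece of 'software'."""
        self.inputs[source] = weight

    def activate(self, signals):
        """signals: dict mapping source neuron -> its output (0.0 .. 1.0)."""
        total = sum(self.inputs[s] * signals[s] for s in self.inputs)
        output = 1.0 if total > 0.5 else 0.0   # crude threshold unit
        # Self-adjust: strengthen connections that contributed to firing.
        for s in self.inputs:
            self.inputs[s] += 0.05 * output * signals[s]
        return output

a, b, c = SoftNeuron("a"), SoftNeuron("b"), SoftNeuron("c")
c.connect(a, 0.4)
c.connect(b, 0.3)
print(c.activate({a: 1.0, b: 1.0}))   # fires, and both weights get nudged up
```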

If we must make analogies between neural pathways and circuits... The hardware is more like the DNA of the neuron: it performs simple translations and ultimately is the source of all the complexity, but is in itself very limited in its processes. The low-level software can be aptly compared to the RNA, enzymes, etc. that express the DNA on the level of each cell... and then upper-level programs act as the neural pathways that perform incredibly complex processes that cannot be envisioned from the simple coding of DNA. Such programs can integrate signals from multiple lower-level programs and can self-adjust based on input. They can inactivate other programs, self-organize subroutines, and perform all of the analogous processes that neural pathways perform.

Nothing in the DNA itself seems to imply the complex workings of a neural pathway - the abilities to rewire, be activated by other neurons, take in multiple sources of input and give single outputs, and so on - but that doesn't mean that DNA cannot be the source of all this complexity. Your problem is that you look at the circuit as the brain and the software as the thoughts... that's just not what is going on here. The SOFTWARE is the brain, and the software's output is the thoughts.
 
arg-fallbackName="Master_Ghost_Knight"/>
Ozymandyus said:
What specifically is it you want them to 'complete' for us to agree to this humiliation? Because honestly, they have already achieved most of what I've been talking about. So I have no problem taking such a bet, even if a 2 year time frame is quite a bit short of what I stated from the outset: within 10-20 years.

My point was never "you can't do it, period". It was that "you don't know how to today", and 10 to 20 years says just that. And 10 to 20 years is the kind of estimate you would want to avoid, because you really can't predict when the next breakthrough will occur.
Ozymandyus said:
You are clearly completely unable to read stuff for yourself about this issue,
It is not that I don't read the material; it is that what I know contradicts what you say, not what your sources say.
Ozymandyus said:
and are so caught up with the hardware limitations that you have to deal with that you are unaware of the emergent processes that can be achieved by software. It is the software that is mimicking the complicated connectivity and behavior of neural pathways, and can self adjust and form new connections to other pieces of software.
Hardware is your limit; software is just a piece of data that says which hardware function is called next, that's it. If my hardware lacks an "and" function, you are unable to do any "and" by direct means, and if you also remove "not" and "or", you can't do "and" no matter how ingenious your software is.
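To make the instruction-set point concrete, here is a toy sketch (names are illustrative): as long as the remaining primitives are functionally complete, software can synthesize the missing operation via De Morgan's law; strip the set below completeness and no software trick can recover it.

```python
# If the hardware exposes only NOT and OR, software can still synthesize
# the missing AND via De Morgan's law.  With a set that is not functionally
# complete (e.g. only XOR), no amount of software gives AND back.

def hw_not(a): return 1 - a          # stand-ins for the hardware primitives
def hw_or(a, b): return a | b

def soft_and(a, b):
    """AND built purely from the available NOT and OR primitives."""
    return hw_not(hw_or(hw_not(a), hw_not(b)))

for a in (0, 1):
    for b in (0, 1):
        assert soft_and(a, b) == (a & b)
print("AND synthesized from NOT + OR")
```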

Ozymandyus said:
If we must make analogies between neural pathways and circuits... The hardware is more like the DNA of the neuron: it performs simple translations and ultimately is the source of all the complexity, but is in itself very limited in its processes. The low-level software can be aptly compared to the RNA, enzymes, etc. that express the DNA on the level of each cell... and then upper-level programs act as the neural pathways that perform incredibly complex processes that cannot be envisioned from the simple coding of DNA. Such programs can integrate signals from multiple lower-level programs and can self-adjust based on input. They can inactivate other programs, self-organize subroutines, and perform all of the analogous processes that neural pathways perform.

Nothing in the DNA itself seems to imply the complex workings of a neural pathway - the abilities to rewire, be activated by other neurons, take in multiple sources of input and give single outputs, and so on - but that doesn't mean that DNA cannot be the source of all this complexity. Your problem is that you look at the circuit as the brain and the software as the thoughts... that's just not what is going on here. The SOFTWARE is the brain, and the software's output is the thoughts.
Bad comparison. Hardware is the actual machine where the states are physically encoded and where the events representative of operations take place (that would be the brain; DNA would be like the blueprint). Software is the set of instructions that tells you the order in which those events take place (that would be your memory).
 
arg-fallbackName="Zylstra"/>
Ozymandyus said:
The problem isn't that I don't understand how computer programs work, which I do... It's that you have no real understanding of how the human brain works. It is basically programmed in the exact way we program computers (with ons and offs and built-in pattern-recognition software and storage).


That's simply not true. Computers do nothing but move electrons. The electro-chemical systems of our brains are far more complex. Can the mind ever truly know its own nature? We'll see.

 
arg-fallbackName="Ozymandyus"/>
Zylstra said:
That's simply not true. Computers do nothing but move electrons. The electro-chemical systems of our brains are far more complex. Can the mind ever truly know its own nature? We'll see.
To say that computers do nothing but move electrons is no more apt than saying biological entities do nothing but form and break molecular bonds.
 
arg-fallbackName="Zylstra"/>
Ozymandyus said:
To say that computers do nothing but move electrons is no more apt than saying biological entities do nothing but form and break molecular bonds.
Computers are predictable and incapable of randomness. They are predictable in a way that no macroscopic organism is.
 
arg-fallbackName="Spase"/>
Hm.

I can see both sides of this argument. There are limitations on what the hardware can do, but the question is whether those limitations are more constraining than the physical limitations that biochemistry imposes on our brains.

Master Ghost Knight, my thinking is this: while it's true that software doesn't allow us to ignore hardware limitations, it does allow us to put a sort of mask over the simple instructions that are underneath it all. What are people other than just a super-complex system masking a series of simple underlying reactions? I'm not convinced that hardware is as limiting as you're claiming... at least not limiting because of its limited instruction set. Maybe I'm misunderstanding this part of your argument; if so, feel free to correct me.

A side comment on the program that builds variations on a composer's work in their style... this doesn't seem as impressive when you consider that we have a very solid mathematical representation for music, and permutations of formulas are something that computers are pretty good with. I must admit that I haven't read the article yet, though (I'm about to rush off to class).

Ozy... I agree that this sort of computer intelligence is coming, but I also agree with MGK's main point that we don't have a solid idea of what intelligence is, or, if we do, how to represent it in a way that fits our current tools. I'm not an AI specialist, but I do use trained neural nets for predictions (protein structures), and while they aren't the most complex out there (we just use feed-forward stuff), they have real limitations. They aren't little nets either - training one takes weeks of CPU time on our campus cluster - but they still can't do a lot of what people are talking about intelligent computers doing. The underlying problem is we don't know what we're shooting for yet, because we don't know what intelligence really means.
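For readers unfamiliar with the term, a feed-forward pass is just input flowing one way through fixed layers with no recurrence or rewiring; a minimal sketch with arbitrary toy weights (nothing like a trained protein-structure model):

```python
# Minimal feed-forward pass: input -> hidden layer -> output, one direction only.
import numpy as np

def feed_forward(x, w_hidden, w_out):
    hidden = np.tanh(w_hidden @ x)        # one hidden layer
    return np.tanh(w_out @ hidden)        # single output unit

x = np.array([0.2, 0.7, 0.1])             # toy input features
w_hidden = np.random.rand(4, 3) - 0.5      # 3 inputs -> 4 hidden units
w_out = np.random.rand(1, 4) - 0.5         # 4 hidden units -> 1 output
print(feed_forward(x, w_hidden, w_out))
```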

The last time I read about these AIs that are capable of coming up with new experiments and testing hypotheses, it was really just the computers going through permutations of what works and what doesn't and picking experiments related to things that had previously been done. I actually have some videos on AI that I've been meaning to watch, but so far, in all the reading I've done, I haven't seen anything that looks like the breakthrough that will lead to creativity. The most interesting article I've read was in that singularity issue of IEEE Spectrum talking about theories of what intelligence really is.

My favorite article:
http://www.spectrum.ieee.org/jun08/6278

The rest of the issue:
http://www.spectrum.ieee.org/singularity

I don't know the answer... but reading those articles (which are relatively recent), I don't see a lot of consensus on AI being super close, or even on what the fundamental things that make up consciousness are. The first article I linked takes a good stab at it, and I like it a lot, but there is no claim of a solution. When I bring this up with my mom for her opinion, she just shrugs and says she doubts it. Her argument is primarily, "They've been saying strong AI is right around the corner since I graduated college and they still are." She graduated with an M.S. in computer engineering from Carnegie Mellon and spent most of her time as a grad student studying the brain, so she isn't completely uneducated in the area. Things have come a loooong way since she was studying it, but we're still shaky on what it is that makes us tick mentally.
 
arg-fallbackName="Spase"/>
Zylstra said:
Computers are predictable and incapable of randomness. They are predictable in a way that no macroscopic organism is.

I just wanted to respond to this quickly,

My feeling is that randomness is just how we describe a system that is too complex for us to predict. While molecular dynamics are unfathomably difficult to predict in a deterministic way, they can be predicted as a statistical system with incredible accuracy. The randomness in biological thinking machines isn't what makes us smart; it's the structure and pattern, which are not random, that give us the ability to think and reason.

I think it would be very hard to argue that there is not some set of algorithms that defines the way we process information, and if thought can be defined algorithmically, it can be simulated. The randomness in our brains is impairing, not helpful.
 
arg-fallbackName="ebbixx"/>
Spase said:
"They've been saying strong AI is right around the corner since I graduated college and they still are." She graduated with a M.S. in computer engineering from Carnegie Mellon and spent most of her time as a grad student studying the brain so she isn't completely uneducated in the area. Things have come a loooong way since she was studying it but we're still shaky on what it is that makes us tick mentally.

Most of the interesting advances since then (it sounds like I'm roughly from your mother's generation) have shifted away from the intentional and overtly controlled approaches that were popular when AI first became a "hot" subject, and moved more and more toward the development and recombination of what I'll generalize hideously to call heuristic algorithms. I often wonder whether the breakthrough won't come when we decide to "grow" computers that we don't fully understand, based on biological models, perhaps interfacing them more directly with (forgive a huge oversimplification) the sort of "left brain" computational devices that represent most of the advancement so far.

I'm not convinced we can fully research and explore the more holistic, pattern-seeking, generalizing and "intuitive" mental processes until we can create "brains" that we feel no reservations about dissecting and dismantling (reverse engineering, if you will) in any and all ways imaginable. Granted, non-destructive imaging technologies allow us to confirm or disprove more conjectures and hypotheses about how the more "human" mental processes actually work than we could have without such methods, but ethical limitations have surely blocked many investigative techniques that might have given us insights we have yet to achieve in these areas.
 
arg-fallbackName="Ozymandyus"/>
As a sidenote, I want to make it clear that I am not making an argument that computers are already conscious or already capable of precisely human thinking, nor that this is right around the corner. I am simply saying that they can already perform the sort of tasks that we have often thought only human intelligence was capable of (like derivations of formulas, supervised learning, etc.) It seemed to me that MGK was denying that a computer could teach itself new theorems or devise experiments and confirm them, which is what I was so staunchly arguing against. Such as when he said:
Master_Ghost_Knight said:
That is not true. A computer will never deduce a theorem that isn't already hardwired.

Here's one example...
http://www.wired.com/wiredscience/2009/04/newtonai/

Or this

http://news.cnet.com/robo-scientist-makes-gene-discovery-on-its-own/?tag=newsLatestHeadlinesArea.0
 
arg-fallbackName="Master_Ghost_Knight"/>
Ozymandyus said:
As a sidenote, I want to make it clear that I am not making an argument that computers are already conscious or already capable of precisely human thinking, nor that this is right around the corner. I am simply saying that they can already perform the sort of tasks that we have often thought only human intelligence was capable of (like derivations of formulas, supervised learning, etc.) It seemed to me that MGK was denying that a computer could teach itself new theorems or devise experiments and confirm them, which is what I was so staunchly arguing against. Such as when he said:


Here's one example...
http://www.wired.com/wiredscience/2009/04/newtonai/

Or this

http://news.cnet.com/robo-scientist-makes-gene-discovery-on-its-own/?tag=newsLatestHeadlinesArea.0
Interesting, but I'm still not quite convinced yet.

P.S. None of those cases are theorems. :p
 
arg-fallbackName="scalyblue"/>
Master_Ghost_Knight said:
Bad comparison. Hardware is the actual machine where the states are physically encoded and where the events representative of operations take place (that would be the brain; DNA would be like the blueprint). Software is the set of instructions that tells you the order in which those events take place (that would be your memory).

In the brain, the hardware is the neurons, each capable of exciting one or more neurons, being excited by one or more neurons, and stopping a chain of excitement. The 'software' is not memory but the pattern of connections between those neurons. Hence the software in the brain is more like firmware.

Microprocessors are fundamentally based on "gates" that are essentially nested, microscopic, solid-state relays.

In order to make a facsimile of a human brain in an IC, one would need to develop a method of handling electrical impulses in a nonlinear manner, which would really require us to develop a new sort of electronics that may or may not be beneficial.

In order to make a facsimile of a human brain with multiple ICs, one would only need to replicate the number of, and behavior of neurons in a human brain through process threading and communications protocols.

Writing a program to replicate the function of a neuron in a CPU thread would be simple fare. Writing a protocol that can efficiently connect that thread to other threads is a bit harder, but doable. Replicating these simple processes at the scale of complexity of the human brain is unthinkable with today's technology. Most estimates put the number of neurons in the human brain somewhere between 10 and 100 billion, and we have just managed to put about 1 billion computers on the face of the planet. We have a bit of a way to go.

-edit- Not to mention that signaling in the brain has a variable speed of propagation, as it is not purely electrical but electrochemical, and that propagation is part of the functionality.
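A toy sketch of that thread-per-neuron idea, with a queue standing in for the communications protocol; two neurons only, so it illustrates the mechanism, not the 10-100 billion scale mentioned above:

```python
# Each neuron is a thread; the "protocol" is a queue of weighted signals.
import threading, queue

class NeuronThread(threading.Thread):
    def __init__(self, label, inbox, outboxes, threshold=1.0):
        super().__init__()
        self.label = label
        self.inbox = inbox            # incoming weighted signals
        self.outboxes = outboxes      # queues of downstream neurons
        self.threshold = threshold
        self.potential = 0.0

    def run(self):
        while True:
            signal = self.inbox.get()
            if signal is None:                     # shutdown message
                break
            self.potential += signal
            if self.potential >= self.threshold:   # fire and reset
                print(self.label, "fires")
                for out in self.outboxes:
                    out.put(0.6)                   # excite downstream neurons
                self.potential = 0.0

q_a, q_b = queue.Queue(), queue.Queue()
a = NeuronThread("A", q_a, [q_b])
b = NeuronThread("B", q_b, [])
a.start(); b.start()
q_a.put(0.7); q_a.put(0.5)    # second input pushes A over its threshold
q_a.put(None); a.join()       # shut A down once its inputs are processed
q_b.put(None); b.join()       # B received one 0.6 signal, below threshold, so it stays quiet
```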
 
arg-fallbackName="Blasted"/>
As one who conducts research in computational biology and applies machine learning to genomics, I kind of feel obligated to respond to the statement that computers are incapable of doing anything outside of what we instruct them to do.

First of all, even on the most restrictive interpretation possible, this is demonstrably false: languages such as Prolog and LISP are notable in that one can write programs in them that allow computers to generate new code for themselves to adapt to various situations.
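The post refers to Prolog and LISP; as a much cruder analogue, here is a sketch in Python of a program emitting and adopting new code for itself at runtime (the "learned" rule and the data are purely illustrative):

```python
# A program that writes source code from data it sees at runtime, then runs it.
observations = [(1, 2), (2, 4), (3, 6)]           # (input, output) pairs seen at runtime

# "Learn" a linear rule from the data and emit source code implementing it.
slope = observations[0][1] // observations[0][0]  # trivially inferred slope: 2
generated_source = f"def learned_rule(x):\n    return {slope} * x\n"

namespace = {}
exec(generated_source, namespace)                 # the program adopts its newly written code
print(namespace["learned_rule"](10))              # -> 20
```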

Secondly, you are being unreasonably restrictive about what "intelligence" should mean. To me (as one who actually works in this field, mind you), intelligence means that I can give the computer some information and expect it to make inferences and draw conclusions from that data that I did not give it. Genetic algorithms are a striking example of this. For a published example, you can look up the use of genetic algorithms in determining bipedal gait. The creation of new knowledge by a computer in this instance is self-evident: if the programmer already knew how to model optimal bipedal gait for his robots, he would not have needed the genetic algorithm in the first place.
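A minimal sketch of the genetic-algorithm loop being described, with a toy fitness function (count the 1-bits) standing in for the far richer gait-quality measure used in that kind of work:

```python
# Minimal genetic algorithm: selection, one-point crossover, mutation.
import random

def fitness(genome):
    return sum(genome)                       # toy objective: count the 1s

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=30, length=20, generations=40):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # keep the fitter half
        children = []
        while len(parents) + len(children) < pop_size:
            mom, dad = random.sample(parents, 2)
            cut = random.randrange(1, length)          # one-point crossover
            children.append(mutate(mom[:cut] + dad[cut:]))
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))   # typically close to all 1s, which nobody hard-coded
```

The point carries over to any scale: the final genome is not written by the programmer; it is found by selection and mutation against the fitness measure.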

For a more striking visual example, check out some of the YouTube videos, e.g. http://www.youtube.com/watch?v=0XMInJeN3co where a virtual creature was evolved for jumping ability, and then for linear-motion ability, and the end result was a creature that adapted its ability to jump in order to move around.

To argue that something of this magnitude is "instructions given to the computer" is a bit silly: the information given to the computer is insignificant compared to the magnitude of the information it produces.

In essence, the computer has performed research: it has taken knowledge given to it (knowledge of how, given certain artificial pressures (ie "I want creatures that jump higher") to evolve this creature rapidly to optimally fit this pressure) and produced entirely new knowledge. And if research isn't considered intelligence/creating new knowledge, then I really don't know what is.
 
arg-fallbackName="pdka2004"/>
Think of it like the difference between playing games on your Xbox and your PC.

Your Xbox is able to give better graphics and higher speeds during gameplay, using a smaller processor and graphics card than your PC, because it is dedicated to performing only one task.

Your PC is not able to play games as well because it is continually performing other tasks while the game is running. Multiply that difference in operations by about a billion and you start to understand why it can sometimes appear that a computer is able to perform certain tasks more quickly than a brain.
 