
Discussion for AronRa and OFNF exclusive thread

arg-fallbackName="Rumraket"/>
dandan said:
Rumraket said:
Here they use ancestral sequence reconstruction to resurrect ancestral versions of the ATP-synthase complex to elucidate how they function and what kind of evolution they subsequently underwent. Notice how, if the ATP-synthase complex did not evolve, we would have no particular reason to expect these resurrected ancestral versions to happen to be functionally capable ATP-synthase motors. In this context, this kind of ancestral sequence reconstruction therefore constitutes a concrete empirical TEST of the postulate that the ATP-synthase evolved from a simpler ancestral stage.

I obviously don’t agree with the idea that it “evolved” from a simpler stage
Too bad, that's what the evidence shows. Do you know how ancestral sequence reconstruction works?
dandan said:
, but for the sake of simplicity let's pretend that it did. Let's pretend that common ancestry is true and that all ATP motors came from a simpler ancestor…
The question is: how do you know that Darwinian mechanisms were responsible for the creation of an ATP motor? How do you know that it could have been created by random genetic changes + natural selection?
The algorithm that infers ancestral stages assumes empirically established mechanisms of mutation and selection. If that is not what really took place in the past, it would seem remarkably lucky for the algorithm to nevertheless manage to reproduce a functional and simpler ATP-synthase, don't you think?

How do YOU explain that?
dandan said:
In order to do that you have to provide the mutations (or steps) that would produce an ATP motor and then show that each step produced an advantage.
Technically, the article I linked does show some of those mutations. Did you actually read it, or did you just post in blindness? I ask because your response shows a remarkable lack of understanding of what you supposedly read.

In any case I actually disagree that I have to show what you're asking for. That is not required in order to have sufficient evidential justification for accepting that the ATP-synthase evolved according to known evolutionary processes. I have used this analogy now several times, but you show no signs of "getting it". I don't have to show where each foot is planted to have good justification for believing that a man can walk across Russia.
dandan said:
Up to this point, a Lamarckist, a mutationist, or even someone like Michael Behe who believes in “guided mutations” could use the same evidence that you presented.
Please elaborate: how can an algorithm that assumes Darwinian evolution be used by a Lamarckist?

I don't care about Michael Behe's silly belief that an invisible man is "guiding" mutations into place. That is a useless and unfalsifiable statement. It simply isn't needed to explain our observations. If Behe wants to believe that, so be it. I don't have that need, so I don't feel I have to tell him that it could not possibly have happened. All I can say to that is that he's complicating the picture by adding unseen and unnecessary magical entities into it.
dandan said:
Rumraket said:
Those numbers have already been worked out, go back and read them
.
Translation: I have no idea, I simply invented “10 trillion mutations”
No, articles by Lynch and others have already been linked in this thread. It is not my problem you have conveniently forgotten about it or ignored it.
dandan said:
Rumraket said:
So the kind of design you really believe in is by definition unfalsifiable? Good to know
Sure, I have no problem admitting that no organ (real or hypothetical) could be too complex to have been created by God. The problem is that Darwinism is in the same situation, but you won't admit it
Another silly statement, since I have already told you how it could be falsified several times.

Dandan, seriously, you're not here to have a discussion and consider our arguments. You are obviously only here to regurgitate standard propaganda and assert your preconceived conclusions over and over again.
 
arg-fallbackName="Dragan Glas"/>
Greetings,
dandan said:
DRAGAN
You're misunderstanding complexity.

In order to design software, you have to be able to have the complete picture, as it were, in your mind - in other words, you are more complex than what you create. [As someone who started out as a programmer, I know what I'm talking about here.]

Having "simpler" brains would mean that they wouldn't have the ability to design/create something more complex - their "simpler" brains couldn't cope with the complexity(!)

So is there a magical law that states that an individual can't create something more complex than himself? I challenge you to quote a scientist or even a philosopher who makes that statement (something peer-reviewed would be better).
There is no "law" - "magical" or otherwise - that says it's impossible. Rather, experience of the real world shows us that no simple system has designed a more complex system, although a complex system can evolve from a simpler system.

In the development of artificial intelligence, computer scientists have designed small programs which can then act in concert to produce more complex behaviours - an example of this is how BT (in the UK) handles connecting phone calls. Originally, humans did this, but the system became so complex over time, that they had to delegate this to computer programs. The problem was, if humans couldn't do it, how could they write the software to do it? So, they wrote small programs - "ants" - to work together and solve the problem using simple rules. It worked in that these programs solved the problems more efficiently than the original human operators.

A more complex system (behaviour) "evolved" out of simpler parts ("ants") working together.

But the point I'm making is that the programmers developed a simple program - designed/created something simpler than themselves - to accomplish a more complex task.

Can you show that what you're claiming is true? That a simpler system can design - rather than evolve - a more complex system?
dandan said:
Nothing more - no agency was mentioned as being a necessary aspect of his definition

Agency is not mentioned in Dembski's definition either. I challenge you to copy-paste a quote from Dembski where he uses agency as part of his definition of specified complexity; please show me the actual quote, not just a link to a random article.

This sentence has many letters (complexity)

This sentence has a meaning (specificity)

Therefore this sentence is specified and complex

Note how agency is not part of the definition; however, I do believe that agency is the best explanation for the cause of this sentence. See the difference?
As already pointed out in Schneider's article, which I've posted several times, Dembski's use of his terms is arbitrary, ambiguous and interchangeable - and, as a result, completely useless; but I suppose you'll dismiss Schneider's as a "random article", despite the fact that I and others here would call it a "relevant article".

In a number of Dembski's articles on specified complexity, he has stated that the only known explanation for his "specified complexity" involves intelligence:
[url=http://www.metanexus.net/essay/explaining-specified-complexity]Explaining Specified Complexity[/url] said:
But this raises the obvious question, whether there might not be a fundamental connection between intelligence or design on the one hand and specified complexity on the other. In fact there is. There's only one known source for producing actual specified complexity, and that's intelligence. In every case where we know the causal history responsible for an instance of specified complexity, an intelligent agent was involved. Most human artifacts, from Shakespearean sonnets to Dürer woodcuts to Cray supercomputers, are specified and complex. For a signal from outer space to convince astronomers that extraterrestrial life is real, it too will have to be complex and specified, thus indicating that the extraterrestrial is not only alive but also intelligent (hence the search for extraterrestrial intelligence - SETI).
In the following two paragraphs, he then plays coy, asking if there is "specified complexity" in Nature, then posits that Behe's "irreducible complexity" - if true - means that:
... a door is reopened for design in science that has been closed for well over a century.
As Elsberry notes in A response to Dembski's "Specified Complexity":
By the definitions that Dembski lays out in his book, "The Design Inference", the complexity of an event is derived from a probabilistic analysis of the event given that a chance process produced that event. In "Explaining Specified Complexity" and "Specified Complexity", Dembski now tells us that the relevant complexity measure must be taken instead upon the probabilistic analysis of the event given a non-chance hypothesis.
[Emphasis in original.]

Indeed, his second book's title clearly implies the need for agency:

No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence

And, again, elsewhere he continues to imply the need for agency:
[url=http://www.discovery.org/a/10]Why Evolutionary Algorithms Cannot Generate Specified Complexity[/url] said:
A full treatment will have to await a book I'm currently writing (Redesigning Science: Why Specified Complexity Is a Reliable Empirical Marker of Actual Design). But I want to make these preliminary results available because the misconception that one can purchase specified complexity on the cheap is widespread and ill-conceived.

The only known generator of specified complexity that we know is intelligence. Sans intelligence, a process that yields specified complexity merely converts already existing specified complexity.
In other words, as far as Dembski is concerned, agency is always required for "specified complexity".

Kindest regards,

James
 
arg-fallbackName="Dave B."/>
dandan said:
The author of the paper doesn't explain how the ATP motor evolved, nor did he even attempt to do so.
Just like AronRa and everybody else, you are just posting random articles
I don't recall claiming that this paper would explain how the ATP motor evolved. I asked you to read it and let me know if you had any questions.

Can we agree that proton gradients are necessary for ATP synthases to function properly? And can we also agree that this paper does a decent job of explaining the origin of these gradients?
 
arg-fallbackName="dandan"/>
RUMRAKET
In any case I actually disagree that I have to show what you're asking for. That is not required in order to have sufficient evidential justification for accepting that the ATP-synthase evolved according to known evolutionary processes. I have used this analogy now several times, but you show no signs of "getting it". I don't have to show where each foot is planted to have good justification for believing that a man can walk across Russia.

But THE difference is that one can show a step-by-step path that would allow a man to cross Russia.
The algorithm that infers ancestral stages assumes empirically established mechanisms of mutation and selection. If that is not what really took place in the past, it would seem remarkably lucky for the algorithm to nevertheless manage to reproduce a functional and simpler ATP-synthase, don't you think?

Here is the thing: I would ask you to explain the algorithm and prove that this algorithm represents reality.
But if I do that, you will end up answering something completely different, and if I ever ask you about that algorithm again you will say “I already told you”
Another silly statement, since I have already told you how it could be falsified several times.

I was expecting a serious answer… not something as ridiculous as your “10 trillion” mutations
 
arg-fallbackName="dandan"/>
DRAGAN
There is no "law" - "magical" or otherwise - that says it's impossible. Rather, experience of the real world shows us that no simple system has designed a more complex system, although a complex system can evolve from a simpler system.

In the development of artificial intelligence, computer scientists have designed small programs which can then act in concert to produce more complex behaviours - an example of this is how BT (in the UK) handles connecting phone calls. Originally, humans did this, but the system became so complex over time, that they had to delegate this to computer programs. The problem was, if humans couldn't do it, how could they write the software to do it? So, they wrote small programs - "ants" - to work together and solve the problem using simple rules. It worked in that these programs solved the problems more efficiently than the original human operators.

A more complex system (behaviour) "evolved" out of simpler parts ("ants") working together.

But the point I'm making is that the programmers developed a simple program - designed/created something simpler than themselves - to accomplish a more complex task.

Can you show that what you're claiming is true? That a simpler system can design - rather than evolve - a more complex system?

Sure, the human genome is made of 3 billion DNA letters, so any book with more than 3 billion letters would be more complex than its creator… as simple as that (this is according to Dawkins' definition of complexity).

You are basing your argument on the illusion that the creator has to be more complex than the creation; obviously you haven't presented any evidence for that assertion. All you have is your personal instinct, and your atheist friends repeating that falsehood over and over again.

In a number of Dembski's articles on specified complexity, he has stated that the only known explanation for his "specified complexity" involves intelligence

True, but that is not what you said earlier…you said that agency was part of the definition of specified complexity, implying that the argument was circular.

So basically Dembski claims that specified complexity (or complexity) can only come from a mind, and Dawkins says that specified complexity (or complexity) can also come from evolution.

They both mean the same thing when they use the term specified complexity / complexity; they simply disagree on the interpretation of such a pattern.
Can we agree on this point and quit these ridiculous word games?
 
arg-fallbackName="dandan"/>
Dave B. said:
I don't recall claiming that this paper would explain how the ATP motor evolved. I asked you to read it and let me know if you had any questions.

Can we agree that proton gradients are necessary for ATP synthases to function properly? And can we also agree that this paper does a decent job of explaining the origin of these gradients?
Yes, we can agree.
 
arg-fallbackName="Dragan Glas"/>
Greetings,
dandan said:
DRAGAN
There is no "law" - "magical" or otherwise - that says it's impossible. Rather, experience of the real world shows us that no simple system has designed a more complex system, although a complex system can evolve from a simpler system.

In the development of artificial intelligence, computer scientists have designed small programs which can then act in concert to produce more complex behaviours - an example of this is how BT (in the UK) handles connecting phone calls. Originally, humans did this, but the system became so complex over time, that they had to delegate this to computer programs. The problem was, if humans couldn't do it, how could they write the software to do it? So, they wrote small programs - "ants" - to work together and solve the problem using simple rules. It worked in that these programs solved the problems more efficiently than the original human operators.

A more complex system (behaviour) "evolved" out of simpler parts ("ants") working together.

But the point I'm making is that the programmers developed a simple program - designed/created something simpler than themselves - to accomplish a more complex task.

Can you show that what you're claiming is true? That a simpler system can design - rather than evolve - a more complex system?
Sure, the human genome is made of 3 billion DNA letters, so any book with more than 3 billion letters would be more complex than its creator… as simple as that (this is according to Dawkins' definition of complexity).
No - that is not according to Dawkins' definition.

Let me remind you of the criteria:

Heterogeneity, non-random and "proficiency" (including reproduction).

Your book analogy fails Dawkins' definition.
dandan said:
You are basing your argument on the illusion that the creator has to be more complex than the creation, obviously you haven´t presented any evidence for such assertion, all you have is your personal instinct, and your atheist friends repeating that falsehood over and over again.
In the same way that all causes of which we are aware in Nature are naturalistic in origin, so too we have observed that all intentional designs are the result of more complex designers.

I've given you one example - the "ants" programs - which appears to be more complex only because of the evolved behaviour. In fact, the programs themselves are simpler than their programmers.
dandan said:
In a number of Dembski's articles on specified complexity, he has stated that the only known explanation for his "specified complexity" involves intelligence
True, but that is not what you said earlier…you said that agency was part of the definition of specified complexity, implying that the argument was circular.
Agency is intrinsic to his definition of "specified complexity", therefore it is a circular argument.

Since agency is intrinsic to it, and he uses "specified complexity" to "discover" design - as Schneider pointed out in his article, which you don't seem to have read and/or understood this fact - it is a circular argument:
(Note: the concept of "specified" is the point where Dembski injects the intelligent agent that he later "discovers" to be design! This makes the whole argument circular. Dembski wants "CSI" rather than a precise measure such as Shannon information because that gets the intelligent agent in. If he detects "CSI", then by his definition he automatically gets an intelligent agent. The error is in presuming a priori that the information must be generated by an intelligent agent.)

[...]

According to Dembski, the existence of "specified complexity" always implies an "intelligent" designer.
The reason it's intrinsic to his definition is that his whole intention is to insert "God" into the mix.

It is all sophistry.
dandan said:
So basically Debski claims that specified complexity (or complexity) can only come from a mind and Dawkins says that specified complexity (or complexity) can also come from evolution.

They both mean the same thing when they use the term specified complexity / complexity the simply disagree on the interpretation of such pattern.
Can we agree on this point and quit these ridiculous word games
As I've already pointed out, they don't mean the same thing - it's not just their interpretations that differ, their definitions use different criteria. As Schneider noted in the above article, Dembski uses his made-up "CSI" - Dawkins is using Shannon.

How can they then "mean the same thing"?

Kindest regards,

James
 
arg-fallbackName="Rumraket"/>
dandan said:
Rumraket said:
In any case I actually disagree that I have to show what you're asking for. That is not required in order to have sufficient evidential justification for accepting that the ATP-synthase evolved according to known evolutionary processes. I have used this analogy now several times, but you show no signs of "getting it". I don't have to show where each foot is planted to have good justification for believing that a man can walk across Russia.
But THE difference is that one can show a step by step path that would allowed a man to cross Russia.
Which is irrelevant to the point, because it's still not strictly needed to know that it is possible.
dandan said:
Rumraket said:
The algorithm that infers ancestral stages assumes empirically established mechanisms of mutation and selection. If that is not what really took place in the past, it would seem remarkably lucky for the algorithm to nevertheless manage to reproduce a functional and simpler ATP-synthase, don't you think?
Here is the thing, I would ask you to explain the algorithm and prove that this algorithm represent reality.
Maybe you should actually bother to read up on it yourself, instead of sitting back saying "nuh-uh" until everything is spoon-fed to you.

Here, read this:
http://www.bx.psu.edu/miller_lab/dist/11_Blanchette.pdf
3.1. Predicting Ancestral Sequences
The prediction of ancestral genomes can be divided into four main steps. A crucial first step toward the reconstruction is to build an accurate multiple alignment of the extant orthologous sequences, thus establishing orthology relationships among the nucleotides of each sequence. Second, the process of indel reconstruction determines the most likely scenario of insertions and deletions that may have led to the extant sequences. Third, substitution history is reconstructed using a maximum likelihood approach. The last step involves dealing with genome rearrangements (inversions, transpositions, translocations, duplications, and chromosome fusions, fissions, and duplications).

Basically, the algorithm compares a large set of similar protein or DNA sequences (assumed to be homologous), produces a phylogenetic tree, and reconstructs the most probable ancestral state from which they evolved, given certain assumed standard evolutionary mechanisms (such as known mutation biases and the like).
3.1.3. Substitutions Reconstruction
After having established which positions of the multiple alignment correspond to bases in the ancestor, the inferAncestors program predicts which nucleotide (A, C, G, or T) was present at each position in the ancestor using the standard posterior probability approach (24) based on a dinucleotide substitution model in which substitutions at two adjacent positions are independent except for CpG, whose substitution rate to TpG is 10 times higher than those of other transitions (25). This phase of the reconstruction relies on the availability of accurate branch length estimates for the phylogenetic tree, which can be obtained as described under Subheading 2.2.

The "reality" of the results the algorithm comes up with is supported by their concrete functionality. With sufficiently good data, we can reconstruct ultra-ancient proteins (some of them over 3.5 billion years old) that no longer exist, and determine that they work and how they work.

This is therefore a test of the algorithm (and therefore a test of its assumptions), because if the algorithm assumed something false, we'd have no reason to expect those reconstructed proteins to actually function.
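To make the bottom-up logic concrete, here is a minimal toy sketch of ancestral state inference. It uses Fitch parsimony on a single alignment column rather than the posterior-probability substitution model the Blanchette chapter describes, so it only illustrates the general idea (infer internal-node states from leaf states on a tree); the tree and characters are invented for the example.

```python
# Toy illustration of ancestral state reconstruction (NOT the actual
# maximum-likelihood method from the linked chapter): Fitch parsimony
# infers, for one alignment column, the ancestral character sets that
# minimize the number of substitutions along a known binary tree.

def fitch_sets(tree):
    """Bottom-up Fitch pass: return the candidate state set at the root.

    `tree` is either a leaf character (e.g. 'A') or a (left, right) tuple.
    """
    if isinstance(tree, str):          # leaf: observed nucleotide
        return {tree}
    left, right = (fitch_sets(child) for child in tree)
    common = left & right
    # If the children agree on some state, keep the intersection;
    # otherwise a substitution is implied, so keep the union.
    return common if common else left | right

# One alignment column for four extant sequences, tree ((A,A),(A,G)):
column_tree = (("A", "A"), ("A", "G"))
print(fitch_sets(column_tree))  # → {'A'}
```

Real reconstructions work the same way structurally, but weight each candidate state by branch lengths and a substitution model instead of simply counting changes.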
dandan said:
But If I do that you will end up answering something completely different, and if I ever ask you again that algorithm you will say “I already told you”
It isn't my problem that you consistently fail to understand what is plainly written to you.
dandan said:
Another silly statement, since I have already told you how it could be falsified several times.

I was expecting a serious answer… not something ridiculous as your “10 trillion” mutations
There was nothing ridiculous about my answer. It is a categorical fact that an ATP-synthase requiring 10 trillion simultaneous mutations could not have evolved by known evolutionary processes. You got what you asked for, you are apparently disappointed that this question of yours wasn't the "gotcha" you had hoped for.
 
arg-fallbackName="Dave B."/>
dandan said:
Dave B. said:
I don't recall claiming that this paper would explain how the ATP motor evolved. I asked you to read it and let me know if you had any questions.

Can we agree that proton gradients are necessary for ATP synthases to function properly? And can we also agree that this paper does a decent job of explaining the origin of these gradients?
yes we can agree
Great. Thank you.

Here is another paper I would like you to read that explains the evolution of the sub-units of the ATPases. Again, please let me know if you have any questions.

http://jeb.biologists.org/content/172/1/137.full.pdf

If you can agree that this article provides a plausible explanation for the evolution of the sub-units of ATPases then we'll continue.

By the way, this paper does mention some of the points made by Rumraket so I hope you've also read and understood the papers he has referenced so far.
 
arg-fallbackName="dandan"/>
Dragan Glas said:
Greetings,

No - that is not according to Dawkins' definition.

Let me remind you of the criteria:

Heterogeneity, non-random and "proficiency" (including reproduction).

Your book analogy fails Dawkins' definition.

Does a book fail to have any of these attributes?

I am not familiar with the term “proficiency”, and dictionaries didn't help me.

Are you honestly holding the position that it is 100% impossible to find aliens that are more intelligent than humans, but that are simpler in their genetic make-up? Are you saying that an intelligent but simple alien is as impossible as a circle with corners?
What you have to do is present evidence, either scientific or philosophical, that proves that complexity is a necessary attribute of “intelligence”; you hold the burden of proof.
(Note: the concept of "specified" is the point where Dembski injects the intelligent agent that he later "discovers" to be design! This makes the whole argument circular. Dembski wants "CSI" rather than a precise measure such as Shannon information because that gets the intelligent agent in. If he detects "CSI", then by his definition he automatically gets an intelligent agent. The error is in presuming a priori that the information must be generated by an intelligent agent.)

[...]

According to Dembski, the existence of "specified complexity" always implies an "intelligent" designer.


Yes, and according to me, the existence of a watch implies the existence of a watchmaker, but that doesn't mean that “watchmaker” is part of the definition of watch.

The concept of specified complexity doesn't imply a priori the existence of a designer; rather, a designer is claimed to be the best explanation for specified complexity.
As I've already pointed out, they don't mean the same thing - it's not just their interpretations that differ, their definitions use different criteria. As Schneider noted in the above article, Dembski uses his made-up "CSI" - Dawkins is using Shannon.

Shannon defines complexity (or information) as the number of bits that a system has; for example, this sentence:

I like Pizza

Is as complex as

U gojl ulfd

Also, according to Shannon's criteria, a junkyard is more complex than an airplane, because a junkyard has more parts.
Both Dawkins and Dembski would disagree.
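The underlying point here, that Shannon's measure is blind to meaning, is easy to check numerically. A minimal sketch, assuming per-character empirical entropy from symbol frequencies as the measure (one standard way to estimate Shannon information; the two example strings are from above):

```python
import math
from collections import Counter

def shannon_entropy_bits(text):
    """Empirical Shannon entropy of `text` in bits per character,
    computed from character frequencies alone (meaning is ignored)."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A meaningful sentence and a similar-length gibberish string yield
# comparable entropies, because the measure only sees symbol
# statistics, not "specification" (meaning).
print(shannon_entropy_bits("I like Pizza"))
print(shannon_entropy_bits("U gojl ulfd"))
```

Whether that blindness is a defect of the measure or of the argument built on it is, of course, exactly what the two sides here dispute.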

But don't worry: since you seem to have many problems with “creationist” terms, for now I will try to use “complexity” or “complicated” instead.
So the creationist argument is:

Premise 1: life is complicated

Premise 2: Complicated stuff can only come from a mind

Therefore life came from a mind.

The argument is falsifiable, all you have to do is disprove any of those premises.
 
arg-fallbackName="he_who_is_nobody"/>
dandan said:
So the creationist argument is:

Premise 1: life is complicated

Premise 2: Complicated stuff can only come from a mind

Therefore life came from a mind.

The argument is falsifiable, all you have to do is disprove any of those premises.

Jesus-facepalm.jpg
 
arg-fallbackName="Dustnite"/>
dandan said:
Premise 1: life is complicated

Premise 2: Complicated stuff can only come from a mind

Therefore life came from a mind.

The argument is falsifiable, all you have to do is disprove any of those premises.


head-explode.gif
 
arg-fallbackName="Rumraket"/>
dandan said:
So the creationist argument is:

Premise 1: life is complicated

Premise 2: Complicated stuff can only come from a mind

Therefore life came from a mind.

The argument is falsifiable, all you have to do is disprove any of those premises.
Premise 2 has already been falsified.
 
arg-fallbackName="Dragan Glas"/>
Greetings,

I confess that on reading your answer, dandan, I had a head-on-keyboard moment.
dandan said:
Dragan Glas said:
Greetings,

No - that is not according to Dawkins' definition.

Let me remind you of the criteria:

Heterogeneity, non-random and "proficiency" (including reproduction).

Your book analogy fails Dawkins' definition.
Does a book fail to have any of these attributes?
The point is, that in order to fulfil the definition, it must meet all the criteria - not just any or some.

A book does not meet all the criteria - therefore, it does not fulfil the definition.

The number of "letters" does not govern "complexity", as this article shows.
dandan said:
I am not familiar with the term “proficiency”, and dictionaries didn't help me.
Which leads me to believe that you didn't read and/or understand the passages from Dawkins' book that you asked me to read.

He actually explains what he means in context by the term "proficiency".
The answer we have arrived at is that complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone. In the case of living things, the quality that is specified in advance is, in some sense, 'proficiency': either proficiency in a particular ability such as flying, as an aero-engineer might admire it; or proficiency in something more general, such as the ability to stave off death, or the ability to propagate genes in reproduction.
He is referring to functionality that can't arise through random chance - in other words, it arises naturalistically through evolution, rather than a "mind" as Dembski insists.
dandan said:
Are you honestly holding the position that it is 100% impossible to find aliens that are more intelligent than humans, but that are simpler in their genetic make-up? Are you saying that an intelligent but simple alien is as impossible as a circle with corners?
Still dealing in absolutes, I see.

You are trying to imply that if something isn't "100% impossible", then it's possible in the real world.

This is simply not the case.

This is to argue that if there's a non-zero probability - no matter how close to zero it is - that snapping my fingers will "turn off" the sun, then it's possible.

It is the fallacy of equating logical possibility with physical possibility.

As Margaret Drabble put it:
When nothing is sure, anything is possible.
The fact is, in the real world, there is logical possibility versus physical impossibility.

I have already shown you that apparently simpler systems - made up of simple parts ("ants") - are, in fact, more complex (behaviourally), and thus, the overall system is more complex.

In The Origin Of Our Species, Chris Stringer notes that we evolved more efficient biochemical processes in our brains, which improved our ability to process information. "More efficient" could be interpreted as "simpler" - however, as it results in a greater ability to process information, it means our brains (as systems) are more "complex".

An alien that has a lesser ability to process information is "simpler" and, as such, would be less likely to be capable of designing a more complicated system than a more intelligent alien - or a human.

Ask a chimpanzee or gorilla to design something - will they do better than a human?

I am arguing that a more intelligent person is more capable than a less intelligent person. You're effectively arguing the opposite.

The size of the genome is irrelevant to its expression - however, a simpler genome has a lower capability of expression.
dandan said:
What you have to do is present evidence, either scientific or philosophical, that proves that complexity is a necessary attribute of "intelligence" - you hold the burden of proof.
Intelligence is itself the result of complexity - the more neurological connections in the brain, the more complex it is and the greater the intelligence. Therefore, complexity is necessary for intelligence.
dandan said:
(Note: the concept of "specified" is the point where Dembski injects the intelligent agent that he later "discovers" to be design! This makes the whole argument circular. Dembski wants "CSI" rather than a precise measure such as Shannon information because that gets the intelligent agent in. If he detects "CSI", then by his definition he automatically gets an intelligent agent. The error is in presuming a priori that the information must be generated by an intelligent agent.)

[...]

According to Dembski, the existence of "specified complexity" always implies an "intelligent" designer.
Yes and according to me, the existence of a watch implies the existence of a watch maker, but that doesn’t mean that “watch maker” is part of the definition of watch.
Are you saying that a watch does not require a watch-maker? That it can come about by chance?

I think not.

How else could Paley have used a watch as evidence of a watch-maker - a designer? Ergo, "God".

Ergo, a watch-maker is intrinsic to a watch. Similarly, a "designer" is intrinsic to Dembski's "specified complexity".

It is a circular argument.
dandan said:
The concept of specified complexity doesn't imply a priori the existence of a designer; rather, a designer is claimed to be the best explanation for specified complexity.
Yes, it does imply a priori a designer.

Dembski's writings are quite clear on that - as Schneider, Elsberry, et al. have shown.
dandan said:
As I've already pointed out, they don't mean the same thing - it's not just their interpretations that differ, their definitions use different criteria. As Schneider noted in the above article, Dembski uses his made-up "CSI" - Dawkins is using Shannon.
Shannon defines complexity (or information) as the amount of bits that a system has, for example this sentence

I like Pizza

Is as complex as

U gojl ulfd
Actually, it isn't - the latter sequence has one fewer symbol than the first. Moreover, the hidden code for bold in the first sequence represents extra symbols.
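As an aside: Shannon information depends on a probability model, not just a raw symbol count. A minimal sketch (my own illustration, not anything either poster computed) comparing the two example strings under the simple empirical per-character model:

```python
import math
from collections import Counter

def total_bits(s):
    """Total Shannon information of s, in bits, under the empirical
    (per-character frequency) probability model."""
    counts = Counter(s)
    n = len(s)
    # Entropy per symbol: -sum over characters of p * log2(p)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return n * h

print(len("I like Pizza"), len("U gojl ulfd"))  # 12 vs 11 symbols
print(total_bits("I like Pizza"), total_bits("U gojl ulfd"))
```

Under this model the first string carries slightly more total information, simply because it is one symbol longer at a comparable per-symbol entropy - which is the point being made: symbol count, not "meaning", drives the Shannon measure.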
dandan said:
Also, according to Shannon's criteria, a junk-yard is more complex than an airplane, because a junk-yard has more parts.
That depends on whether the given junk-yard actually has more parts than a given aeroplane.
dandan said:
Both Dawkins and Debski would disagree.
Because one is using a real measure of information - Shannon - the other, the sophistic "CSI".
dandan said:
But don't worry; since you seem to have many problems with "creationist" terms, for now I will try to use "complexity" or "complicated" instead.
Thank you.
dandan said:
So the creationist argument is:

Premise 1: life is complicated

Premise 2: Complicated stuff can only come from a mind

Therefore life came from a mind.

The argument is falsifiable, all you have to do is disprove any of those premises.
Apart from the ambiguity of the use of the term "complicated", P2 is false. The word "only" is the reason.

Even with that word removed, the conclusion is still false.

Complexity does not require intelligence. All it requires is a cause.

Kindest regards,

James
 
arg-fallbackName="Gnug215"/>
dandan said:
But don't worry; since you seem to have many problems with "creationist" terms, for now I will try to use "complexity" or "complicated" instead.
So the creationist argument is:

Premise 1: life is complicated

Premise 2: Complicated stuff can only come from a mind

Therefore life came from a mind.

The argument is falsifiable, all you have to do is disprove any of those premises.


It seems that has already been done, but I'll give you another counter-argument:

Premise 1: Minds only appear in physical, biological forms.

Premise 2: No mind can exist without life existing first.

Conclusion: Since no mind existed before life existed, God does (or did?) not exist.
 
arg-fallbackName="dandan"/>
DRAGAN
The point is, that in order to fulfil the definition, it must meet all the criteria - not just any or some.

A book does not meet all the criteria - therefore, it does not fulfil the definition.

The number of "letters" does not govern "complexity", as this article shows.

OK, so why doesn't a book have these attributes: heterogeneity, non-randomness, and "proficiency"?
My point is that, using Dawkins's definition, a very long book with meaning would be more complicated than the human genome, implying that it is theoretically possible to create something more complex than yourself.
Or let me put it this way;

Pretend that a modern computer is 1% as complex as a human.

Is it theoretically possible to create a computer that is 2% as complex as a human?

Is it theoretically possible to create a computer that is 5% as complex as a human? What about 10%, or 20%, or 50%, or 99.999%?
And what about 101%, or 105%, or 200%?

You seem to be implying that there is a limit that would prevent humans from passing 99.99%, but you have not provided evidence for such a limit.
Apart from the ambiguity of the use of the term "complicated", P2 is false. The word "only" is the reason.

Even with that word removed, the conclusion is still false.

Complexity does not require intelligence. All it requires is a cause.

Kindest regards,

James

Every single example of a complicated thing that has been observed ALWAYS comes from a mind; you are making an arbitrary exception for "life".

However, I made the positive argument; therefore, I hold the burden of proof. I am arguing that complicated things can only come from a mind. What evidence would you accept in support of that premise?
 
arg-fallbackName="Rumraket"/>
dandan said:
Every single example of a complicated thing that has been observed ALWAYS comes from a mind
No. We have seen complexity evolve, so that's wrong.
 
arg-fallbackName="Rumraket"/>
dandan said:
However, I made the positive argument; therefore, I hold the burden of proof. I am arguing that complicated things can only come from a mind. What evidence would you accept in support of that premise?
How could you even demonstrate that? You'd need to test every possible natural process there can be. Have you done that? Do you know of all the processes that take place in the universe, will ever take place, or have ever taken place?
 
arg-fallbackName="dandan"/>
Rumraket said:
dandan said:
However, I made the positive argument; therefore, I hold the burden of proof. I am arguing that complicated things can only come from a mind. What evidence would you accept in support of that premise?
How could you even demonstrate that? You'd need to test every possible natural process there can be. Have you done that? Do you know of all the processes that take place in the universe, will ever take place, or have ever taken place?

So by your logic, we can't be certain (at least to a high degree) that, for example, matter/energy can't be created or destroyed, until we test every single atom in the universe…
 