Squawk said:How is it a silly question if you don't know the answer? Ask away
CosmicJoghurt said:The Theory of Evolution by Natural Selection is a scientific theory that serves as an explanation for the fact of evolution, i.e. genetic mutations in populations over time. Correct? (in bad terms)
Squawk said:Hmm, pedant here.
Humans and chimps share a common ancestor, and at the point of speciation the ancestral species went extinct.
Great idea for a thread btw.
Squawk said:Regarding that definition of evolution, Dean, it's not strictly speaking complete. Changes in allele frequency are an observation of evolution, but are not evolution itself. Evolution is simply descent with modification in a reproducing population, which by necessity will lead to changes in allelic frequency simply through random chance.
That really is being a pedant though, and the allele thing is a good starting point (and one I've used often).
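Squawk's point - that descent with modification in a finite population shifts allele frequencies through chance alone - can be illustrated with a toy neutral-drift simulation. This is a Python sketch of my own, not code from the thread; the population size, generation count, and seed are arbitrary choices:

```python
import random

def drift(pop_size=50, p0=0.5, generations=100, seed=42):
    """Neutral genetic drift: each generation, every one of pop_size gene
    copies is inherited from the previous generation at random, so the
    allele frequency wanders even though no selection is applied."""
    random.seed(seed)
    p = p0
    trajectory = [p]
    for _ in range(generations):
        # each gene copy independently inherits the allele with probability p
        count = sum(1 for _ in range(pop_size) if random.random() < p)
        p = count / pop_size  # new allele frequency
        trajectory.append(p)
    return trajectory

freqs = drift()
```

Run it a few times with different seeds and the frequency rises or falls with no selective pressure at all, which is exactly the "random chance" Squawk describes.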
hackenslash said:The only sensible arrangement here is a conception of species that contains a temporal component. Luckily, we have one, namely the BSC (biological species concept), which defines a species as a population of organisms throughout which gene flow can occur at a given time. Of course, this falls down when we are talking about our own species, for the reasons outlined above. All of our descendants, even in the event of an evolutionary divergence, will quite correctly call themselves humans.
Inferno said:hackenslash said:The only sensible arrangement here is a conception of species that contains a temporal component. Luckily, we have one, namely the BSC (biological species concept), which defines a species as a population of organisms throughout which gene flow can occur at a given time. Of course, this falls down when we are talking about our own species, for the reasons outlined above. All of our descendants, even in the event of an evolutionary divergence, will quite correctly call themselves humans.
That's not the definition of "species" I know.
If this definition of species were true, you'd be ignoring a) ring species,
b) "two morphologically similar groups of organisms (which) are "potentially" capable of interbreeding",
c) hybridization (such as between tigers and lions) and other factors of which I may not be aware.
Remember that the definition of "species" is quite a fuzzy one, so you'll get n+1 definitions of species in a room of n biologists.
The definition I grew up with (which might already be out of date again; there doesn't seem to be a consensus) is that a species is a population of organisms through which 95% gene flow occurs. The 95% means that, for example, in ring species where you'll find bits and pieces of shared DNA, two different groups may still interbreed (such as chimps and humans, or fertile mules) without violating the definition.
It is however 7am and I've just woken up so I might a) be reading this completely wrong, b) confuse the definition or c) still in fact be dreaming, which I'll go back to now.
hackenslash said:That might be because you're not an evolutionary biologist, as this is the most widely used among evolutionary biologists and has been since it was devised by Ernst Mayr in the 1940s.
hackenslash said:Why? A ring species is a population of organisms throughout which gene flow can occur.
hackenslash said:That's because gene flow doesn't occur, and is a valid separation.
hackenslash said:Again a valid separation. Tigers and lions are not the same species.
hackenslash said:You will note that it also excludes asexuals, which highlights the problem you're trying to elucidate, and I agree that it's not a straightforward problem, but the definition given is robust enough for most purposes, as long as it isn't applied too stringently.
CosmicJoghurt said:Dean, that was great! I already knew those genetic basics, but the rest was very well explained as well. It's alright if I keep coming with silly questions, right?
Thanks. And yes. Ask, question, criticize, challenge.
Squawk said:[ ... ] Regarding that [definition] of evolution, Dean, it's not strictly speaking complete. Changes in allele frequency are an observation of evolution, but are not evolution itself. Evolution is simply descent with modification in a reproducing population, which by necessity will lead to changes in allelic frequency simply through random chance.
That really is being a pedant though, and the allele thing is a good starting point (and one I've used often).
Still highly oversimplified, in my opinion. Something that comes to mind when hearing this (for obscure reasons) is the issue of stressing the distinction between natural and artificial selective pressures.
whatsinitforme said:I have met way too many stupid people in my day to believe that natural selection exists. If they exist, then natural selection cannot exist as a theory.
Only if you assume that natural selection has a specific goal or "end" in mind - and it doesn't have a mind - so this objection is false.
Practically Base-Code said:
Code:
    [...]
    START_POINT:        // this is a label; // means comment until end of line. This is pseudocode, not functional code.
    [...]
    GOTO START_POINT;
    [...]

Of course, that little formulation can continue pretty much ad infinitum, so long as the code in the middle does not exit the loop, or an "interrupt" takes control. An iterative loop in this sense has a control variable, one that changes with every iteration of the loop, and a condition dependent on the control variable, applied in the correct manner, regularly induces an exit from the loop:

Code:
    [...]
    $count = 0;
    START_POINT:
    [...]
    $count = $count + 1;    // usually just $count++ ; I'm writing this pseudocode 'perlish' - the $ sigil signals a simple (scalar) variable
    if ($count == 5000) { GOTO END_LOOP; }
    [...]
    GOTO START_POINT;
    END_LOOP:
    [...]

"GOTO" structures are very old these days and have significant drawbacks, so modern language practitioners prefer iterative code blocks with an explicit control statement. As in:

Code:
    [...]
    for ($count = 0; $count <= 5000; $count++) {
        [... insert bizarre and cryptic code here ...]
    }
    [...]

Recursion is code that is called from within its own body. The classical example is the factorial, implemented as a subroutine or function:

Code:
    sub factorial($n) {     // not strictly perlish, but obvious; $n is whatever number was passed in
        $pf = factorial($n - 1);
        return $pf * $n;
    }

As written, this also runs ad infinitum, and a runaway recursive routine rapidly consumes all resources of the machine: each call to a routine creates a piece of memory called a stack frame to hold the variables that are completely local to that run. In this case $pf is always unique to the latest instance of the factorial routine, even though there may be many instances live at the same time (e.g. if you called it with four as the argument, after a split millisecond you would have more than three copies of the factorial routine "running" - one each for 4, 3, 2, 1, and so on - and it would keep going). To avoid that, not to mention calculating a proper factorial, you need a condition that exits the routine without invoking further recursion. Accurate, albeit trivial, code and results follow:

Code:
    [Dean@tab ~]$ cat junk.pl
    #!/bin/perl
    # factorial (of 20, in this case)
    sub factorial {
        my $n = shift;
        my $pf;
        if ($n <= 1) { return 1; }
        $pf = factorial($n - 1);
        $pf *= $n;
        print "returning $n $pf\n";
        return $pf;
    }
    [...]
    [Dean@tab ~]$ ./junk.pl
    returning 2 2
    returning 3 6
    returning 4 24
    returning 5 120
    returning 6 720
    returning 7 5040
    returning 8 40320
    returning 9 362880
    returning 10 3628800
    returning 11 39916800
    returning 12 479001600
    returning 13 6227020800
    returning 14 87178291200
    returning 15 1307674368000
    returning 16 20922789888000
    returning 17 355687428096000
    returning 18 6.402373705728e+15
    returning 19 1.21645100408832e+17
    returning 20 2.43290200817664e+18
    [Dean@tab ~]$

Just to lay this all out: junk.pl is a file containing a program as ordinary text. In essence every statement is executed in sequence, except as the statements themselves dictate varying it; statements in a subroutine are not executed until it is called by name. Underneath all of this there is an operating system - here, Linux - that has ultimate control of the hardware (the BIOS controls interrupt numbers and device addresses, but even there Linux device drivers often override the BIOS). Linux does not "halt".
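For readers who don't speak Perl, the same bounded recursion can be sketched in Python (my translation, not part of the original post); the base case n <= 1 is what keeps the stack of frames finite:

```python
def factorial(n):
    """Recursive factorial. The test n <= 1 exits the routine without
    invoking further recursion, so at most n stack frames ever exist."""
    if n <= 1:
        return 1
    return n * factorial(n - 1)
```

factorial(20) gives 2432902008176640000, the exact integer behind the 2.43290200817664e+18 that the Perl version prints as a float.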
I've represented the system as running more or less continuously with no downtime. The perl program reads the text file junk.pl, compiles it into its own virtual machine language, and runs the code, launching that virtual machine with the compiled code of junk.pl. Not counting ancillaries like the graphics system, the windowing system, and the window/desktop manager, you have the operating system, the terminal, the shell, the scripting language/machine, and the script itself *all* running simultaneously.

When a loop iterates, nothing "halts" or exits; a register called the "program counter" simply gets loaded with an address other than that of the next sequential instruction (which is what normally happens). Upon recursion, nothing halts or exits either; the program or virtual machine does a little more work than in a loop - the new stack frame must be created, which is usually little more than moving a memory pointer to a different address (but can be more, depending on the language). The script doesn't halt, the virtual machine doesn't halt, the language engine doesn't halt, the terminal doesn't halt, the shell doesn't halt, and the operating system surely doesn't halt.

Re-entrancy loosely means code that is "entered", i.e. executed, more than once; under that loose definition, loops and iterative code are "reentrant". But usually, being such a fancy word, it is reserved for code capable of more complex acrobatics - at least being capable of calling itself while still in "scope" - of which recursion is the simplest example.
It's difficult to illustrate the more exotic forms of re-entrancy, but suffice it to say that modern programming constructs like exception handling and closures, among others, make more exotic use of it, and on multi-engine, multi-threaded machines, where the same block of code may be called from different processors *at the same time*, re-entrant algorithms must be designed with both power and care. (Nowadays the interpreters and compilers do most of the "heavy lifting", but there are many cases, particularly in multi-threaded environments, where the programmer really does have to *think* about what can happen.)

Now, "algorithm" is a little hard to define. Traditionally it meant all executable code defined before a program ran. But that doesn't really buy much; the Halting problem proves that you can't predict what will happen even if the code is all pre-defined. So at first you have static code but dynamic variables. Then we got load libraries: pieces of code defined at different times. Then dynamic loaders: which code is loaded at run time depends upon the data. Then object orientation, where the data is treated as defining the routines - more a change in viewpoint than applicable in some circumstances - but then we got dynamic class loaders and interpreters. Now we have just-in-time compilers and dynamically typed generics in some languages. There are "ORM" systems which will dynamically regenerate and compile code depending upon changes in the data in a database. So in a modern system we have programs in which, at least in theory, the code for part of the program does not exist until the program has partly run. You *cannot* know the code in advance of the program run. That throws the traditional definition of an algorithm right out the window.
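One of those less exotic but still instructive re-entrant forms can be shown with a Python generator (my example, not from the post): each next() re-enters the same block of code at the point of the last yield, with the local state preserved between entries:

```python
def counter(limit):
    """Generator: execution suspends at each yield and re-enters there
    on the next call, with the local variable n still in scope."""
    n = 0
    while n < limit:
        yield n
        n += 1

gen = counter(3)
first = next(gen)    # enters the body, runs up to the first yield
second = next(gen)   # re-enters just after the yield; n survived
```

Nothing halts between the two next() calls; the generator's frame simply sits suspended, which is the same stack-frame machinery the recursion discussion describes.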
But the remaining certainty for all "computational" systems is that at any instant you have a determinable state, and the minimal action is an operation which will change that state in a deterministic way. Both of those remaining primary criteria, and anything like them, are missing from biological systems, thanks to quantum chaos at the molecular level if nothing else. And the very complexity which makes living systems so plastic and adaptable also means that the chaos propagates upward into larger aspects of the creature's behavior and features. As much as I respect Dennett on some matters, I cannot see how evolution can be one of these processes, or even like them.

The second siren is teleology, which, if he ever finds out, he ought to be ashamed of himself for. Remember I said the operations of a real algorithm are deterministic? Well, when you look at the result of evolution at any point and look at its causal antecedents, they *look* deterministic. But of course they look deterministic because they're in the past! They've already been determined. They were not deterministic when they transpired! A molecule here could have zigged, and that chromosome could have been its opposite number; a DNA strand could have swapped places and taken an extra codon or two; some contingent phenotype was not expressed, and somebody was half a step slower or faster, and somebody else along the line ate, or got eaten, differently than it would have had that molecule zagged.

So evolution and algorithms have some marked similarities, and algorithms can "really" - to the extent algorithms really do anything - do evolution, in principle as complex as, or indefinitely more complex than, bio-systems do it. But there is no sense in which, by the formalisms of comp-sci, biological evolution can be considered an algorithm.
There seems to be no generally agreed-upon definition of "algorithm", but I find the greatest consensus around something like "a stepwise logical process on some input that proceeds through a finite number of steps to produce some sort of output" - like Euclid's algorithm for finding the greatest common divisor (GCD) of two numbers: the input is the two numbers, and a finite number of operations will always be sufficient to arrive at the GCD, the best (indeed only correct) solution to the problem. On that definition, I find that evolution is not an algorithmic process, for it is without end, and while evolution drives forms toward optimal solutions, there is no guarantee that it will arrive at one. In fact, evolution can dead-end at decidedly non-optimal solutions, such as the back-to-front construction of the mammalian retina. For this reason, I find evolution in the natural world more a sort of heuristic. So I hope this lays it out reasonably well. :) Thanks, if you actually read it through to the end.
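Euclid's algorithm makes the "finite number of steps" point concrete. Here is a minimal Python sketch (the standard textbook form, not code from the thread); termination is guaranteed because the remainder strictly decreases on every pass:

```python
def gcd(a, b):
    """Euclid's algorithm for the greatest common divisor. Each pass
    replaces (a, b) with (b, a mod b); since the remainder strictly
    decreases, the loop must reach b == 0 in finitely many steps."""
    while b != 0:
        a, b = b, a % b
    return a
```

That bounded, guaranteed arrival at the one correct answer is exactly the property evolution lacks, which is the contrast the post is drawing.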