
The Elshamah mega-thread

arg-fallbackName="Elshamah"/>
Re: Proteintransport into Mitochondria is irreducible comple

All cellular functions are irreducibly complex

http://reasonandscience.heavenforum.org/t2179-all-cellular-functions-are-irreducibly-complex

Prokaryotes are thought to differ from eukaryotes in that they lack membrane-bounded organelles. However, it has been demonstrated that there are bacteria which have membrane-bound organelles named acidocalcisomes, and that V-H+PPase proton pumps are present in their surrounding membranes. Acidocalcisomes have been found in organisms as diverse as bacteria and humans. Volutin granules, which are the equivalent of acidocalcisomes, also occur in Archaea and are therefore present in the three superkingdoms of life (Archaea, Bacteria and Eukarya). These volutin granule organelles occur in organisms spanning an enormous range of phylogenetic complexity, from Bacteria and Archaea to unicellular eukaryotes, algae, plants, insects and humans. According to neo-Darwinian thinking, the universal distribution of the V-H+PPase domain suggests that the domain and the enzyme were already present in the Last Universal Common Ancestor (LUCA).

http://reasonandscience.heavenforum.org/t2176-lucathe-last-universal-common-ancestor#3992

If the proton pumps of volutin granules were present in LUCA, they had to emerge prior to self-replication, which places serious constraints on proposing evolution as the driving factor. But if evolution was not the mechanism, what else was? There is not much left: chance, random chemical reactions, or physical necessity.

But let us for a moment accept the "fact of evolution" and suppose it was the driving force that made V-H+PPase proton pumps. In some period prior to the transition from non-life to life, natural selection or another evolutionary mechanism would have had to start polymerisation of the right amino acid sequence to produce V-H+PPase proton pumps by adding one amino acid monomer to the next. First, the whole extraordinary production line of staggering complexity, starting with DNA, would have to be in place, that is:

The cell sends activator proteins to the site of the gene that needs to be switched on, which then jump-start the RNA polymerase machine by removing a plug that blocks the DNA's entrance to the machine. The DNA strands shift position so that the DNA lines up with the entrance to the RNA polymerase. Once these two movements have occurred and the DNA strands are in position, the RNA polymerase machine gets to work melting them out, so that the information they contain can be processed to produce mRNA. 2 The process then continues through:

- INITIATION OF TRANSCRIPTION by RNA polymerase enzyme complexes;
- CAPPING of the mRNA through post-transcriptional modifications by several different enzymes;
- ELONGATION, the main transcription step from DNA to mRNA;
- SPLICING and CLEAVAGE;
- POLYADENYLATION, where a long string of repeated adenosine nucleotides is added;
- TERMINATION, involving over a dozen different enzymes;
- EXPORT FROM THE NUCLEUS TO THE CYTOSOL (the mRNA must be actively transported through the Nuclear Pore Complex channel in a controlled process that is selective and energy-dependent);
- INITIATION OF PROTEIN SYNTHESIS (TRANSLATION) in the ribosome, an enormously complex process;
- COMPLETION OF PROTEIN SYNTHESIS AND PROTEIN FOLDING through chaperone enzymes.

From there the proteins are transported by specialized proteins to their end destination. Most of these processes require ATP, the energy fuel inside the cell.

http://reasonandscience.heavenforum.org/t2067-there-is-no-selective-advantage-until-you-get-the-final-function?highlight=function

The genetic code to make the right ~600 amino acid sequence would have to be made by mutation and natural selection. But mutation of what, if there was no functional protein yet? The problem at this stage is that when there is no selective advantage until you get the final function, the final function doesn't evolve. In other words, a chain of around 600 amino acids is required to make a functional V-H+PPase proton pump, but there is no function until polymerisation of all 600 monomers is completed and the right sequence achieved.

The problem for those who accept the truth of evolution is that they cannot accept the idea that any biological structure with a beneficial function, however complex, might be very far removed from the next closest functional system or subsystem within "sequence space" that would be beneficial if it were ever found by random mutations of any kind. In our case the situation is even more drastic, since a de novo genetic sequence, and subsequently a new amino acid chain, is required. A further constraint is the fact that 100% of the amino acids used and needed for life are left-handed, while DNA and RNA require D-sugars. To this day, science has not sorted out how nature is able to select the right chiral handedness. The problem is that the prebiotic soup is believed to have been a warm soup consisting of racemic mixtures of amino acid enantiomers (and sugars). How did this homogeneous phase separate into chirally pure components? How did an asymmetry (assumed to be small to start with) arise in the population of both enantiomers? How did the preference of one chiral form over the other propagate so that all living systems are made of 100 percent optically pure components?

What is sequence space?
Imagine 20 amino acids mixed up in a pool, randomly mixed, one adjacent to the other. The pool with all the random amino acids is the sequence space. This space can be two-dimensional, three-dimensional, or multidimensional. In evolutionary biology, sequence space is a way of representing all possible sequences (for a protein, gene or genome). Most sequences in sequence space have no function, leaving relatively small regions that are populated by naturally occurring genes. Each protein sequence is adjacent to all other sequences that can be reached through a single mutation. Evolution can be visualised as the process of sampling nearby sequences in sequence space and moving to any with improved fitness over the current one.
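To make the "adjacent through a single mutation" picture concrete, here is a minimal Python sketch; the 5-residue peptide is an arbitrary toy example, not a real protein, and insertions/deletions are ignored:

[code]
# Sketch: count the single-substitution neighbours of a short peptide in sequence space.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids (one-letter codes)

def single_substitution_neighbours(seq):
    """Yield every sequence reachable from seq by changing exactly one residue."""
    for i, residue in enumerate(seq):
        for aa in AMINO_ACIDS:
            if aa != residue:
                yield seq[:i] + aa + seq[i+1:]

toy = "MKTAY"   # hypothetical 5-residue peptide, purely illustrative
neighbours = list(single_substitution_neighbours(toy))

print(len(neighbours))                                   # 5 positions x 19 alternatives = 95
print(f"{20 ** len(toy):,} possible 5-residue sequences in total")   # 3,200,000
[/code]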

Functional sequences in sequence space
Despite the diversity of protein superfamilies, sequence space is extremely sparsely populated by functional proteins. That is, amongst all the possible amino acid sequences, only a few yield functional proteins. Most random protein sequences have no fold or function. To exemplify: in order to write METHINKS IT IS LIKE A WEASEL, there are about 10^40 possible random letter combinations, and only one of them is the right sequence.
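As a quick sanity check of that 10^40 figure, here is a minimal Python sketch, assuming the 27-character alphabet (26 letters plus the space) used in Dawkins' weasel example:

[code]
# Sketch: how many random strings of the same length as the target phrase exist?
import math

target = "METHINKS IT IS LIKE A WEASEL"
alphabet_size = 27                          # 26 letters + space

combinations = alphabet_size ** len(target)

print(len(target))                          # 28 characters
print(f"{combinations:.2e}")                # ~1.2e+40, i.e. on the order of 10^40
print(f"log10 = {math.log10(combinations):.1f}")
[/code]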

Enzyme superfamilies, therefore, exist as tiny clusters of active proteins in a vast empty space of non-functional sequence. The density of functional proteins in sequence space, and the proximity of different functions to one another, is a key determinant in understanding evolvability.
Protein sequence space has been compared to the Library of Babel, a theoretical library containing all possible books that are 410 pages long. In the Library of Babel, finding any book that made sense was impossible due to the sheer number of books and the lack of order.


How would a bacterium evolve a function like a single-protein enzyme - like a V-H+PPase proton pump? The requirement is about 600 specified residues at minimum. A useful V-H+PPase cannot be made with significantly lower minimum size and specificity requirements. These minimum requirements create a kind of threshold below which the V-H+PPase function simply cannot be built up gradually, where very small changes of one or two residues at a time result in a useful change in the degree of proton pump function. Therefore, such functions cannot have evolved in a gradual, step-by-step manner. There is simply no template or gradual pathway from just any starting point to the minimum threshold requirement. Only after this threshold has been reached can evolution take over and make further refinements - but not until then. Now, there are in fact examples of computer evolution that attempt to address this problem.

All Functions are "Irreducibly Complex" 

The fact is that all cellular functions are irreducibly complex, in that all of them require a minimum number of parts in a particular order or orientation. I go beyond what Behe proposes and make the suggestion that even single-protein enzymes are irreducibly complex. A minimum number of parts, in the form of amino acid residues, is required for them to have their particular functions. The proton pump function cannot be realized in even the smallest degree with a string of only 5 or 10 or even 500 residues of any arrangement. Also, not only is a minimum number of parts required for the proton pump function to be realized, but the parts themselves, once they are available in the proper number, must be assembled in the proper order and three-dimensional orientation. Brought together randomly, the residues, if left to themselves, do not know how to self-assemble into much of anything resembling a functional system that even comes close to the level of complexity of even a relatively simple function like a proton pump. And yet, their specified assembly and ultimate order is vital to function.
Of course, such relatively simple systems, though truly irreducibly complex, have evolved. This is because the sequence space at such relatively low levels of functional complexity is fairly dense. It is fairly easy to come across new beneficial sequences if the density of potentially beneficial sequences in sequence space is relatively high. This density does in fact get higher and higher at lower and lower levels of functional complexity, in an exponential manner.

It is much like moving between 3-letter words in the English language system. Since the ratio of meaningful vs. meaningless 3-letter words in the English language is somewhere around 1:18, one can randomly find a new meaningful and even beneficial 3-letter word via single random letter changes/mutations in relatively short order. This is not true for those ideas/functions/meanings that require more and more letters. For example, the ratio of meaningful vs. meaningless 7-letter words, and combinations of smaller words equaling 7 letters, is far, far lower, at about 1 in 250,000. It is therefore just a bit harder to evolve between 7-letter words, one mutation at a time, than it was to evolve between 3-letter words, owing to the exponential decline in the ratio of meaningful vs. meaningless sequences.
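To put rough numbers on that analogy, here is a minimal Python sketch; the meaningful-word ratios are simply the ones quoted above, treated as assumptions rather than counts from any verified dictionary:

[code]
# Sketch: implied counts of "meaningful" strings, given the ratios quoted in the post.

total_3 = 26 ** 3          # all 3-letter strings: 17,576
total_7 = 26 ** 7          # all 7-letter strings: 8,031,810,176

ratio_3 = 18               # post's claim: ~1 meaningful sequence in 18
ratio_7 = 250_000          # post's claim: ~1 in 250,000

print(f"3 letters: {total_3:,} strings -> ~{total_3 // ratio_3:,} meaningful")
print(f"7 letters: {total_7:,} strings -> ~{total_7 // ratio_7:,} meaningful")
print(f"density drops by a factor of ~{ratio_7 // ratio_3:,}")
[/code]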

The same thing is true for the evolution of codes, information systems, and systems of function in living things as it is for non-living things (e.g., computer systems). The parts of these codes and systems of function, if brought together randomly, simply do not have enough meaningful information to do much of anything. So how are they brought together in living things to form such high-level functional order?
 
arg-fallbackName="Dragan Glas"/>
Re: Proteintransport into Mitochondria is irreducible comple

Greetings,

As usual, copying/pasting what others are saying as if it's your own thoughts. And your sources for these quotes are not exactly scientists, are they?

Modern cells are irreducibly complex - so what? How does this disprove (the theory of) evolution?

We start out as a fertilised egg - a single cell. From that, we develop into a multicellular organism with vital organs, the removal of any of which results in death. Each of us is an irreducibly complex system. Yet we didn't start out with every single vital part in its place simultaneously - they developed at different stages.

So much for your irreducibly complex systems requiring that every single part be present simultaneously.

Kindly stop wasting everyone's time by plagiarising others' ideas as if they were your own.

Like everyone else, I'm waiting for you to actually address Rumraket's points - rather than copy/paste more word-salads at everyone.

Kindest regards,

James
 
arg-fallbackName="Elshamah"/>
Re: Proteintransport into Mitochondria is irreducible comple

Prior to the origin of the first living cell, all proteins had to be synthesized de novo, that is, from zero. In order to do that, however, all the machinery to make proteins had to be in place. To propose that ribozymes would have done the job, without a template and without coded information, is far-fetched. Besides this, the machinery that makes proteins is itself made of proteins. That's a catch-22 situation. The cell furthermore would have to a) know how to select the right left-handed amino acids in a mixed pool of amino acids, and b) select, amongst innumerable amino acids, just the 20 required for life, and then select each one correctly and bond one to the other in the right sequence. A protein chain cannot evolve from zero without the machinery in place and the right information. Period. Take the proton pump as an example. First, it emerged prior to replication, which cancels evolution as a possible mechanism. Secondly, there is no function until the protein chain is fully formed with at least 600 amino acid residues, each one linked correctly to another, all L-amino acids selected, and the protein folded correctly. Trial and error will simply NEVER provide you that result. That's impossible. This is true for one protein, not to speak of the thousands in the whole immensely complex cell. Take lottery balls in 20 different colours, one of the colours being black. Number them all from 1 to 600. Then, on one set of balls write "left", and on a duplicate set write "right", for a total of 24,000 balls. Now play the lottery and see how many trials it takes to get a chain of aligned numbers from 1 to 600, using only balls that are black and marked "left". Or, to put it another way: let us consider a simple protein containing 600 amino acids. There are 20 different kinds of L-amino acids in proteins, and each can be used repeatedly in chains of 600. Therefore, they could be arranged in 20^600 different ways. Would you bet a dime on such odds?
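For what it is worth, here is a minimal Python sketch of the arithmetic in the paragraph above, using only the numbers it quotes (600 residues, 20 amino acid types, two chiralities); 20^600 is far too large for a float, so it is expressed via its base-10 logarithm:

[code]
# Sketch: the ball count and the size of the 600-residue sequence space claimed above.
import math

colours, length, chiralities = 20, 600, 2

balls = colours * length * chiralities
print(f"{balls:,} balls")                              # 24,000, as stated

log10_sequences = length * math.log10(colours)         # log10(20^600)
print(f"20^600 is roughly 10^{log10_sequences:.1f}")   # about 10^780.6
[/code]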
 
arg-fallbackName="Mr_Wilford"/>
Re: Proteintransport into Mitochondria is irreducible comple

Irreducibly complex systems have been observed to come about by evolution, as pointed out by Rumraket.

This has been shown to be the case

Therefore it's not a problem for evolution.

Can we stop with the copy and pasting of the same flawed argument?
 
arg-fallbackName="DutchLiam84"/>
Re: Proteintransport into Mitochondria is irreducible comple

It's just spam...plagiarized spam.
 
arg-fallbackName="Rumraket"/>
Re: Proteintransport into Mitochondria is irreducible comple

Elshamah's copy-paste said:
What is sequence space ?
Imagine 20 amino acids mixed up in a pool, randomly mixed , one adjacent to the other. The pool with all the random amino acids is the sequence space. This space can be two dimentional, tridimensional, or multidimensional. In evolutionary biology, sequence space is a way of representing all possible sequences (for a protein, gene or genome). Most sequences in sequence space have no function, leaving relatively small regions that are populated by naturally occurring genes. Each protein sequence is adjacent to all other sequences that can be reached through a single mutation. Evolution can be visualised as the process of sampling nearby sequences in sequence space and moving to any with improved fitness over the current one.

Functional sequences in sequence space
Despite the diversity of protein superfamilies, sequence space is extremely sparsely populated by functional proteins. That is, amongst all the possible amino acid sequences, only a few permit the make of functional proteins. Most random protein sequences have no fold or function.
It is trivial and easy to find functional proteins in random sequence space. The Szostak lab proved this experimentally back in the late '90s and early 2000s, and showed it to be true both for proteins and for RNAs:
http://molbio.mgh.harvard.edu/szostakweb/publications/Szostak_pdfs/Keefe_Szostak_Nature_01.pdf
Functional proteins from a random-sequence library
Anthony D. Keefe & Jack W. Szostak
Functional primordial proteins presumably originated from random sequences, but it is not known how frequently functional, or even folded, proteins occur in collections of random sequences. Here we have used in vitro selection of messenger RNA displayed proteins, in which each protein is covalently linked through its carboxy terminus to the 3′ end of its encoding mRNA [1], to sample a large number of distinct random sequences. Starting from a library of 6 x 10[sup]12[/sup] proteins each containing 80 contiguous random amino acids, we selected functional proteins by enriching for those that bind to ATP. This selection yielded four new ATP-binding proteins that appear to be unrelated to each other or to anything found in the current databases of biological proteins. The frequency of occurrence of functional proteins in random sequence libraries appears to be similar to that observed for equivalent RNA libraries [2,3].
80 random amino acids strung together into a protein. Generate 6x10[sup]12[/sup] different, random copies, test them all for a single (and extremely biologically important) function: Bind ATP.

Among that starting pool of random proteins 80 amino acids in length, there were four (4) different, unrelated proteins found that could do it. That gives about 1 in every 10[sup]11[/sup] proteins capable of binding ATP. Which strongly indicates that as an absolute minimum there is at least one biologically relevant function in every 10[sup]11[/sup] 80-amino-acid long proteins. (I could stop here already, this is enough to render all of creationism bunk).
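A back-of-the-envelope sketch of those numbers in Python; the crude ratio below just divides the four isolated binders by the library size, while the ~1 in 10^11 figure quoted above is taken as given rather than re-derived here:

[code]
# Sketch: frequency of ATP binders implied by the Keefe & Szostak screen, and the
# expected number of hits a library of this size would contain at each frequency.

library_size = 6e12        # random 80-residue proteins screened
families_found = 4         # unrelated ATP-binding proteins recovered

crude_frequency = families_found / library_size
print(f"crude frequency: about 1 in {1 / crude_frequency:.1e} sequences")   # ~1 in 1.5e12

for frequency in (crude_frequency, 1e-11):   # 1e-11 = the figure quoted in the post above
    expected_hits = frequency * library_size
    print(f"at 1 in {1 / frequency:.1e}: ~{expected_hits:.0f} functional proteins expected in the library")
[/code]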

Notice how only a single function was tested for in that pool of random proteins. They could have tested for millions of different functions (binding other biologically important molecules, catalysis of thousands of different chemical reactions, stabilizing phospholipid membranes, etc.) - but they only tested for one, and found it right away in the starting pool.

The simple fact is that Sean Pitman, the creationist liar-for-doctrine from whom you probably copy-pasted this religiously motivated drivel, is talking out of his religiously biased ass, based on a couple of studies whose results he wildly extrapolates into areas the data don't support. For example, the Discovery Institute paid their liar propaganda laboratory to mutate a functional protein until it stopped working (at what it was doing), then they tried to derive a general rule for the rarity of function in protein sequence space from this stupid experiment. It's true, it only required relatively few mutations to destroy the function of the protein in question, and as a result they computed that functional proteins are supposed to exist at a rate of approximately 1 in every 10[sup]77[/sup] proteins. Which, if true, would entail that functional proteins were, as you go on to copy-paste, exceptionally rare. But does their experiment really warrant that kind of conclusion? They mutated a protein until it stopped working (again, at what it was doing). Even then, that is still no guarantee that the protein in question is entirely nonfunctional. It is entirely possible to mutate a specific protein fold that, say, catalyzes some chemical reaction until it stops catalyzing that chemical reaction. But who's to say that protein can't do something else now? It might be able to catalyze a different but related chemical reaction. You actually have to test for that; you can't just declare it nonfunctional and then extrapolate from a test of your single fold to every function for every protein in every environment ever. Obviously.
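To see how starkly those two figures differ, here is a minimal Python sketch comparing them against the library size from the Szostak experiment discussed above; all three numbers come from this thread, nothing new is assumed:

[code]
# Sketch: expected functional proteins in a 6e12-member random library under the
# two competing frequency claims (~1 in 10^77 vs ~1 in 10^11).

library_size = 6e12

for label, frequency in (("1 in 10^77", 1e-77), ("1 in 10^11", 1e-11)):
    expected = frequency * library_size
    print(f"{label}: ~{expected:.1e} functional proteins expected")
    # at 1 in 10^77 the expected count is ~6e-65, i.e. the screen should essentially
    # never find anything; at 1 in 10^11 it is expected to find dozens.
[/code]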

But we know, we already know that it isn't true. The Szostak lab experiments proved it directly. They picked a single, arbitrary, biologically important function, generated random protein sequences and found the function already in the very first pool of random proteins. Furthermore, these proteins could be significantly improved with sequential rounds of selection, both so that their binding affinity improved and so that they could reliably discriminate between similar substrates (the protein ended up being able to bind ATP but not ADP or AMP). Which means the function was not an isolated lucky spike in protein sequence space; it was sitting in a sea of related functions.
Elshamah's copy-paste said:
Enzyme superfamilies, therefore, exist as tiny clusters of active proteins in a vast empty space of non-functional sequence.
This is another flat out lie. Probably from Sean Pitman.

http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1002403
Exploring the Evolution of Novel Enzyme Functions within Structurally Defined Protein Superfamilies
Nicholas Furnham, Ian Sillitoe, Gemma L. Holliday, Alison L. Cuff, Roman A. Laskowski, Christine A. Orengo, Janet M. Thornton
Abstract

In order to understand the evolution of enzyme reactions and to gain an overview of biological catalysis we have combined sequence and structural data to generate phylogenetic trees in an analysis of 276 structurally defined enzyme superfamilies, and used these to study how enzyme functions have evolved. We describe in detail the analysis of two superfamilies to illustrate different paradigms of enzyme evolution. Gathering together data from all the superfamilies supports and develops the observation that they have all evolved to act on a diverse set of substrates, whilst the evolution of new chemistry is much less common. Despite that, by bringing together so much data, we can provide a comprehensive overview of the most common and rare types of changes in function. Our analysis demonstrates on a larger scale than previously studied, that modifications in overall chemistry still occur, with all possible changes at the primary level of the Enzyme Commission (E.C.) classification observed to a greater or lesser extent. The phylogenetic trees map out the evolutionary route taken within a superfamily, as well as all the possible changes within a superfamily. This has been used to generate a matrix of observed exchanges from one enzyme function to another, revealing the scale and nature of enzyme evolution and that some types of exchanges between and within E.C. classes are more prevalent than others. Surprisingly a large proportion (71%) of all known enzyme functions are performed by this relatively small set of 276 superfamilies. This reinforces the hypothesis that relatively few ancient enzymatic domain superfamilies were progenitors for most of the chemistry required for life.
,
Did you catch that last part? 71% of all known enzyme functions (which, if you read the paper, number in the several tens of thousands) are performed by a set of 276 superfamilies.

Moving on:
A significant proportion of the reactions required for life are performed by a relatively small number of superfamilies so it can be postulated that a few ancient enzymatic domain superfamilies were progenitors for most of the chemistry required for life, this considerably develops previous observations [37]. Using the phylogenetic trees to define the evolutionary route taken within a superfamily to change function, we were able to generate the E.C. change matrix. The large numbers of changes at the E.C. 4th level in the summary of E.C. changes in phylogentic trees compared to the low number of E.C. class changes indicates that changes in specificity occur mostly at the leaves of the trees, while more fundamental changes in chemistry occur at the root of the tree. Further work is required to ascertain when in evolution these changes occurred. Therefore a large amount of enzyme diversity occurs through evolution rather than de novo invention. Although, of course, new enzymes must have evolved at some stage, probably very early in the evolution of life. To identify the small number of ‘original’ enzyme progenitors requires more work and more experimental data.
That means most of the major functional folds found in the majority of extant enzymes reduce to a set of 276 proteins, and possibly fewer, from which they all ultimately evolved.

This is the tree of life, but for enzymes instead of species. Almost universal common descent for functional enzymes.

Okay, but how does this even happen then? How do these enzymes change so much through evolution?
Well, new studies have shed some light on that too, and it's mostly by gene duplication:
http://www.plosbiology.org/article/info:doi/10.1371/journal.pbio.1001446
Reconstruction of Ancestral Metabolic Enzymes Reveals Molecular Mechanisms Underlying Evolutionary Innovation through Gene Duplication
Karin Voordeckers, Chris A. Brown, Kevin Vanneste, Elisa van der Zande, Arnout Voet, Steven Maere, Kevin J. Verstrepen
Abstract

Gene duplications are believed to facilitate evolutionary innovation. However, the mechanisms shaping the fate of duplicated genes remain heavily debated because the molecular processes and evolutionary forces involved are difficult to reconstruct. Here, we study a large family of fungal glucosidase genes that underwent several duplication events. We reconstruct all key ancestral enzymes and show that the very first preduplication enzyme was primarily active on maltose-like substrates, with trace activity for isomaltose-like sugars. Structural analysis and activity measurements on resurrected and present-day enzymes suggest that both activities cannot be fully optimized in a single enzyme. However, gene duplications repeatedly spawned daughter genes in which mutations optimized either isomaltase or maltase activity. Interestingly, similar shifts in enzyme activity were reached multiple times via different evolutionary routes. Together, our results provide a detailed picture of the molecular mechanisms that drove divergence of these duplicated enzymes and show that whereas the classic models of dosage, sub-, and neofunctionalization are helpful to conceptualize the implications of gene duplication, the three mechanisms co-occur and intertwine.
It turns out the oldest reconstructed proteins are functionally promiscuous. That means they catalyze many different reactions (by accepting many different substrates) at the same time, though at sub-optimal reaction rates compared to their faster, later-evolved descendants:
Author Summary

Darwin's theory of evolution is one of gradual change, yet evolution sometimes takes remarkable leaps. Such evolutionary innovations are often linked to gene duplication through one of three basic scenarios: an extra copy can increase protein levels, different ancestral subfunctions can be split over the copies and evolve distinct regulation, or one of the duplicates can develop a novel function. Although there are numerous examples for all these trajectories, the underlying molecular mechanisms remain obscure, mostly because the preduplication genes and proteins no longer exist. Here, we study a family of fungal metabolic enzymes that hydrolyze disaccharides, and that all originated from the same ancestral gene through repeated duplications. By resurrecting the ancient genes and proteins using high-confidence predictions from many fungal genome sequences available, we show that the very first preduplication enzyme was promiscuous, preferring maltose-like substrates but also showing trace activity towards isomaltose-like sugars. After duplication, specific mutations near the active site of one copy optimized the minor activity at the expense of the major ancestral activity, while the other copy further specialized in maltose and lost the minor activity. Together, our results reveal how the three basic trajectories for gene duplicates cannot be separated easily, but instead intertwine into a complex evolutionary path that leads to innovation.

A great figure that shows this:

Figure 2. Duplication events and changes in specificity and activity in evolution of S. cerevisiae MalS enzymes.
The hydrolytic activity of all seven present-day alleles of Mal and Ima enzymes as well as key ancestral (anc) versions of these enzymes was measured for different α-glucosides. The width of the colored bands corresponds to kcat/Km of the enzyme for a specific substrate. Specific values can be found in Table S2. Note that in the case of present-day Ima5, we were not able to obtain active purified protein. Here, the width of the colored (open) bands represents relative enzyme activity in crude extracts derived from a yeast strain overexpressing IMA5 compared to an ima5 deletion mutant. While these values are a proxy for the relative activity of Ima5 towards each substrate, they can therefore not be directly compared to the other parts of the figure. For ancMalS and ancMal-Ima, activity is shown for the variant with the highest confidence (279G for ancMalS and 279A for ancMal-Ima). Activity for all variants can be found in Table S2.
doi:10.1371/journal.pbio.1001446.g002
As you can see, the original ancestral enzyme has low substrate specificity and is functionally promiscuous (it catalyzes reactions with all the different substrates (colors), but at a low reaction rate (the thickness of the bands)). Subsequently it gets duplicated, and the daughter enzymes acquire novel mutations that change the substrate specificity, vastly increasing the reaction rates for a smaller subset of substrates, sometimes losing functionality entirely for specific substrates.

Just to see how different duplicated proteins can get from their ancestral versions (while retaining function without problem), check out the supplementary materials for this study on the evolution of Archaea: http://www.biology-direct.com/content/8/1/9
Insights into archaeal evolution and symbiosis from the genomes of a nanoarchaeon and its inferred crenarchaeal host from Obsidian Pool, Yellowstone National Park
Mircea Podar, Kira S. Makarova, David E. Graham, Yuri I. Wolf, Eugene V. Koonin and Anna-Louise Reysenbach

This shows the phylogenetic tree which results from alignments of the Archaeal protein FlaH, which is involved in regulation of Archaeal flagellum synthesis. The data we need are the protein names. Pick a protein on the list, like "229583692 Sulfolobus islandicus M 16 27 uid58851", and look the sequence up at NCBI; this gives us the following protein sequence:
http://www.ncbi.nlm.nih.gov/protein/229583692
ORIGIN
     1 megctviikt gnedldrrls gipfpalimi egdhgtgksv lsaqfcygll iggkkgyvit
    61 teqtskdylk kmkdvkinli pfflkgvlgi aplntnrfnw nstlankile viidfikkrk
    121 nmnfviidsl sivatfaeik qilqfmkdar vlvdlgklil ftvhpdvfne elksritsiv
    181 dvyfklsats iggrrikvle riktiggiqg adaisfdidp algvkvvpls lsra
//

Then pick a distantly related one on the other end of the tree, "222480972 Halorubrum lacusprofundi ATCC 49239 uid58807"

ORIGIN
     1 mphdnllslg lgerdrlnke lgggiprgsi vlmegdygag ksaisqrfay glveegasvt
    61 vmsteltvrg fidqmhsley dmvkpllqee llflhadfds ggafsdddge rkellkrlmn
    121 aeamwnsdvi fldtfdaifr ndptfealvr kneerqaale iisffreiis qgkvvvltvd
    181 psavdddaig pfrsiadvfl qlemievgnd irrqinvkrf agmgeqvgdt igfsvrsgtg
    241 iviesrsva
//

Producing an alignment of these two protein sequences using http://blast.ncbi.nlm.nih.gov/Blast.cgi gives us this result:
[BLAST alignment screenshot]

Which shows that these two arbitrarily picked proteins (FlaH, a flagellum-related regulatory element), which perform the same function in two distantly related Archaea, differ in their amino acid sequence by as much as 77%. Almost the entire protein has changed, yet it still works just as well regulating archaeal flagellar biosynthesis.
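For anyone who wants to reproduce a rough version of this without BLAST, here is a minimal Python sketch that globally aligns the two FlaH sequences quoted above using a crude +1 match / -1 mismatch / -2 gap scoring scheme; BLAST uses BLOSUM scoring and local alignment, so the exact percentage it reports will differ:

[code]
# Sketch: Needleman-Wunsch global alignment of the two FlaH sequences quoted above,
# then a simple percent-identity count over the aligned columns.

SEQ1 = (  # Sulfolobus islandicus FlaH (229583692), numbering/spaces stripped
    "megctviiktgnedldrrlsgipfpalimiegdhgtgksvlsaqfcyglliggkkgyvit"
    "teqtskdylkkmkdvkinlipfflkgvlgiaplntnrfnwnstlankileviidfikkrk"
    "nmnfviidslsivatfaeikqilqfmkdarvlvdlgklilftvhpdvfneelksritsiv"
    "dvyfklsatsiggrrikvleriktiggiqgadaisfdidpalgvkvvplslsra"
).upper()

SEQ2 = (  # Halorubrum lacusprofundi FlaH (222480972)
    "mphdnllslglgerdrlnkelgggiprgsivlmegdygagksaisqrfayglveegasvt"
    "vmsteltvrgfidqmhsleydmvkpllqeellflhadfdsggafsdddgerkellkrlmn"
    "aeamwnsdvifldtfdaifrndptfealvrkneerqaaleiisffreiisqgkvvvltvd"
    "psavdddaigpfrsiadvflqlemievgndirrqinvkrfagmgeqvgdtigfsvrsgtg"
    "iviesrsva"
).upper()

MATCH, MISMATCH, GAP = 1, -1, -2

def global_align(a, b):
    """Return the two gapped strings of a simple global alignment of a and b."""
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * GAP
    for j in range(1, m + 1):
        score[0][j] = j * GAP
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (MATCH if a[i - 1] == b[j - 1] else MISMATCH)
            score[i][j] = max(diag, score[i - 1][j] + GAP, score[i][j - 1] + GAP)
    # Trace back from the bottom-right corner to recover one optimal alignment.
    out_a, out_b, i, j = [], [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + (MATCH if a[i - 1] == b[j - 1] else MISMATCH):
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i - 1][j] + GAP:
            out_a.append(a[i - 1]); out_b.append("-"); i -= 1
        else:
            out_a.append("-"); out_b.append(b[j - 1]); j -= 1
    return "".join(reversed(out_a)), "".join(reversed(out_b))

aligned1, aligned2 = global_align(SEQ1, SEQ2)
identical = sum(x == y for x, y in zip(aligned1, aligned2))
print(f"alignment length: {len(aligned1)}")
print(f"identical positions: {identical} ({100 * identical / len(aligned1):.1f}% identity)")
[/code]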

In conclusion, everything you copy-paste from your creationist liar sources is wrong. Demonstrably wrong. And I can sit here and show it directly from the primary literature and explain what the sources say in my own words.
 
arg-fallbackName="Elshamah"/>
Re: Proteintransport into Mitochondria is irreducible comple

voyage dans la mayonnaise much, Rumraket ??

your whole post is bunk.....

Prior to the origin of the first living cell, all proteins had to be synthesized de novo, that is, from zero. In order to do that, however, all the machinery to make proteins had to be in place. To propose that ribozymes would have done the job, without a template and without coded information, is far-fetched. Besides this, the machinery that makes proteins is itself made of proteins. That's a catch-22 situation. The cell furthermore would have to a) know how to select the right left-handed amino acids in a mixed pool of amino acids, and b) select, amongst innumerable amino acids, just the 20 required for life, and then select each one correctly and bond one to the other in the right sequence. A protein chain cannot evolve from zero without the machinery in place and the right information. Period. Take the proton pump as an example. First, it emerged prior to replication, which cancels evolution as a possible mechanism. Secondly, there is no function until the protein chain is fully formed with at least 600 amino acid residues, each one linked correctly to another, all L-amino acids selected, and the protein folded correctly. Trial and error will simply NEVER provide you that result. That's impossible. This is true for one protein, not to speak of the thousands in the whole immensely complex cell. Take lottery balls in 20 different colours, one of the colours being black. Number them all from 1 to 600. Then, on one set of balls write "left", and on a duplicate set write "right", for a total of 24,000 balls. Now play the lottery and see how many trials it takes to get a chain of aligned numbers from 1 to 600, using only balls that are black and marked "left". Or, to put it another way: let us consider a simple protein containing 600 amino acids. There are 20 different kinds of L-amino acids in proteins, and each can be used repeatedly in chains of 600. Therefore, they could be arranged in 20^600 different ways. Would you bet a dime on such odds?
 
arg-fallbackName="Rumraket"/>
Re: Proteintransport into Mitochondria is irreducible comple

Elshamah said:
Major metabolic pathways and their inadequacy for origin of life proposals

[...]

This made the leading origin-of-life researcher Leslie Orgel say the following:

The Implausibility of Metabolic Cycles on the Prebiotic Earth
Leslie E. Orgel

http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.0060018

Almost all proposals of hypothetical metabolic cycles have recognized that each of the steps involved must occur rapidly enough for the cycle to be useful in the time available for its operation. It is always assumed that this condition is met, but in no case have persuasive supporting arguments been presented. Why should one believe that an ensemble of minerals that are capable of catalyzing each of the many steps of the reverse citric acid cycle was present anywhere on the primitive Earth, or that the cycle mysteriously organized itself topographically on a metal sulfide surface? The lack of a supporting background in chemistry is even more evident in proposals that metabolic cycles can evolve to “life-like” complexity. The most serious challenge to proponents of metabolic cycle theories—the problems presented by the lack of specificity of most nonenzymatic catalysts—has, in general, not been appreciated. If it has, it has been ignored. Theories of the origin of life based on metabolic cycles cannot be justified by the inadequacy of competing theories: they must stand on their own.
Orgel's paper is about the plausibility of catalytic cycles running on mineral surfaces, specifically the reductive TCA cycle. His work has been superseded:

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4247476/
An Origin-of-Life Reactor to Simulate Alkaline Hydrothermal Vents
Barry Herschy, Alexandra Whicher, Eloi Camprubi, Cameron Watson, Lewis Dartnell, John Ward, Julian R. G. Evans, and Nick Lane
[...]

Despite these stark disparities, some clues do exist to early biochemistry. Strikingly, there are only six known pathways of carbon fixation across all life (Fuchs 2011), but just one of these, the acetyl CoA pathway, is found in both archaea (methanogens) and bacteria (acetogens), albeit with some striking differences between them (Maden 2000; Martin 2012; Sousa and Martin 2014). Neglecting these differences, several other factors testify to the antiquity of the acetyl CoA pathway. It is the only exergonic pathway of carbon fixation, drawing on just H2 and CO2 as substrates to drive both carbon and energy metabolism (Fuchs and Stupperich 1985; Ragsdale and Pierce 2008; Ljungdahl 2009); what Everett Shock has called “a free lunch you’re paid to eat” (Shock et al. 1998). It is short and linear, with just a few steps leading from H2 and CO2 to acetyl CoA and pyruvate, the gateway to intermediary metabolism (Fuchs 2011; Morowitz et al. 2000), thereby avoiding the problem of sequentially declining yields with non-enzymic cycles that might have precluded an abiotic reductive TCA cycle (Orgel 2008).

Buddy, you're fucked. All your arguments are based on lies, denial, cherry-picked quotemines from cherry-picked papers and personal ignorance.

Also, Orgel is not actually correct when he writes that "The most serious challenge to proponents of metabolic cycle theories—the problems presented by the lack of specificity of most nonenzymatic catalysts—has, in general, not been appreciated. If it has, it has been ignored." Papers that predate his own, and that do not at all ignore the nonspecificity of nonenzymatic catalysis, actually exist. See:
http://www.ncbi.nlm.nih.gov/pubmed/17255002
On the origin of biochemistry at an alkaline hydrothermal vent.
Martin W, Russell MJ.
Abstract
A model for the origin of biochemistry at an alkaline hydrothermal vent has been developed that focuses on the acetyl-CoA (Wood-Ljungdahl) pathway of CO2 fixation and central intermediary metabolism leading to the synthesis of the constituents of purines and pyrimidines. The idea that acetogenesis and methanogenesis were the ancestral forms of energy metabolism among the first free-living eubacteria and archaebacteria, respectively, stands in the foreground. The synthesis of formyl pterins, which are essential intermediates of the Wood-Ljungdahl pathway and purine biosynthesis, is found to confront early metabolic systems with steep bioenergetic demands that would appear to link some, but not all, steps of CO2 reduction to geochemical processes in or on the Earth's crust. Inorganically catalysed prebiotic analogues of the core biochemical reactions involved in pterin-dependent methyl synthesis of the modern acetyl-CoA pathway are considered. The following compounds appear as probable candidates for central involvement in prebiotic chemistry: metal sulphides, formate, carbon monoxide, methyl sulphide, acetate, formyl phosphate, carboxy phosphate, carbamate, carbamoyl phosphate, acetyl thioesters, acetyl phosphate, possibly carbonyl sulphide and eventually pterins. Carbon might have entered early metabolism via reactions hardly different from those in the modern Wood-Ljungdahl pathway, the pyruvate synthase reaction and the incomplete reverse citric acid cycle. The key energy-rich intermediates were perhaps acetyl thioesters, with acetyl phosphate possibly serving as the universal metabolic energy currency prior to the origin of genes. Nitrogen might have entered metabolism as geochemical NH3 via two routes: the synthesis of carbamoyl phosphate and reductive transaminations of alpha-keto acids. Together with intermediates of methyl synthesis, these two routes of nitrogen assimilation would directly supply all intermediates of modern purine and pyrimidine biosynthesis. Thermodynamic considerations related to formyl pterin synthesis suggest that the ability to harness a naturally pre-existing proton gradient at the vent-ocean interface via an ATPase is older than the ability to generate a proton gradient with chemistry that is specified by genes.
 
arg-fallbackName="Elshamah"/>
The awe inspiring spliceosome, the most complex macromolecul

The awe-inspiring spliceosome, the most complex macromolecular machine known, and pre-mRNA processing in eukaryotic cells

http://reasonandscience.heavenforum.org/t2180-the-spliceosome-the-splicing-code-and-pre-mrna-processing-in-eukaryotic-cells

Along the way to making proteins in eukaryotic cells, there is a whole chain of subsequent events that must all be fully operational, with the machinery in place, in order to get the functional product, that is, proteins. At the beginning of the process, DNA is transcribed by the RNA polymerase molecular machine to yield messenger RNA (mRNA), which afterwards must go through post-transcriptional modifications. That is: CAPPING, ELONGATION, SPLICING, CLEAVAGE, POLYADENYLATION and TERMINATION, before it can be EXPORTED FROM THE NUCLEUS TO THE CYTOSOL, followed by INITIATION OF PROTEIN SYNTHESIS (TRANSLATION) and COMPLETION OF PROTEIN SYNTHESIS AND PROTEIN FOLDING.

Bacterial mRNAs are synthesized by the RNA polymerase starting and stopping at specific spots on the genome. The situation in eukaryotes is substantially different. In particular, transcription is only the first of several steps needed to produce a mature mRNA molecule. The mature transcript for many genes is encoded in a discontinuous manner in a series of discrete exons, which are separated from each other along the DNA strand by non-coding introns. mRNAs, rRNAs, and tRNAs can all contain introns that must be removed from precursor RNAs to produce functional molecules. The formidable task of identifying and splicing together exons among all the intronic RNA is performed by a large ribonucleoprotein machine, the spliceosome, which is composed of several individual small nuclear ribonucleoproteins, five snRNPs (pronounced ‘snurps’: U1, U2, U4, U5, and U6), each containing an RNA molecule called an snRNA that is usually 100–300 nucleotides long, plus additional protein factors that recognize specific sequences in the mRNA or promote conformational rearrangements in the spliceosome required for the splicing reaction to progress, and many more additional proteins that come and go during the splicing reaction. It has been described as one of "the most complex macromolecular machines known," "composed of as many as 300 distinct proteins and five RNAs".

The snRNAs perform many of the spliceosome’s mRNA recognition events. Splice site consensus sequences are recognized by non-snRNP factors; the branch-point sequence is recognized by the branch-point-binding protein (BBP), and the polypyrimidine tract and 3′ splice site are bound by two specific protein components of a splicing complex referred to as U2AF (U2 auxiliary factor), U2AF65 and U2AF35, respectively.

This is one more great example of an amazingly complex molecular machine that will operate and exercise its precisely orchestrated function properly ONLY with ALL components fully developed and formed and able to interact in a highly complex, ordered, precise manner. Both the software and the hardware must be in place, fully developed, or the mechanism will not work. No intermediate stage will do the job. Nor would the snRNPs (U1, U2, U4, U5, and U6) have any function if not fully developed. And even if they were there, without the branch-point-binding protein (BBP) in place nothing would be accomplished either, since the correct splice site could not be recognized. Would the introns and exons not have had to emerge simultaneously with the spliceosome? No wonder the paper "Origin and evolution of spliceosomal introns" admits: Evolution of exon-intron structure of eukaryotic genes has been a matter of long-standing, intensive debate. 1 And it concludes that: The elucidation of the general scenario of evolution of eukaryote gene architecture by no account implies that the main problems in the study of intron evolution and function have been solved. Quite the contrary, fundamental questions remain wide open. If the first evolutionary step had been the arrival of self-splicing Group II introns, then the question follows: why would evolution not have stopped there, since that method works just fine?


There is no credible road map for how introns, exons, and the splicing function could have emerged gradually. What good would the spliceosome be if the essential sequence elements for recognising where to splice were not in place? What would happen if the pre-mRNA with exons and introns were in place, but no spliceosome was ready to do the post-transcriptional modification, and no splicing code to direct where to splice? In the article ‘JUNK’ DNA HIDES ASSEMBLY INSTRUCTIONS, the author, Wang, observes that splicing "is a tightly regulated process, and a great number of diseases are caused by the 'misregulation' of splicing in which the gene was not cut and pasted correctly." Missplicing in the cell can have dire consequences, as the desired product is not produced, and often the wrong products can be toxic for the cell. For this reason, it has been proposed that ATPases are important for ‘proofreading’ mechanisms that promote fidelity in splice site selection. In his textbook Essentials of Molecular Biology, George Malacinski points out why proper polypeptide production is critical:

"A cell cannot, of course, afford to miss any of the splice junctions by even a single nucleotide, because this could result in an interruption of the correct reading frame, leading to a truncated protein." 


The required precision is quite amazing, and even more astounding is the fact that these incredibly complex molecular machines are able to do the job in the precise manner needed.
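As a toy illustration of the frameshift point in Malacinski's quote above: the sequence and the tiny codon table in this Python sketch are made-up teaching examples, not a real gene, but they show how a one-nucleotide shift changes every downstream codon:

[code]
# Sketch: translate a toy DNA string in frame, then shifted by one nucleotide.

CODONS = {
    "ATG": "M", "GCT": "A", "AAA": "K", "GAA": "E",
    "TTT": "F", "GGC": "G", "CTG": "L", "TAA": "*",   # '*' marks a stop codon
}

def translate(dna):
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODONS.get(dna[i:i + 3], "x")   # 'x' = codon missing from this toy table
        if aa == "*":
            break                            # premature stop -> truncated protein
        protein.append(aa)
    return "".join(protein)

dna = "ATGGCTAAAGAATTTGGCCTGTAA"
print(translate(dna))        # in frame: MAKEFGL
print(translate(dna[1:]))    # shifted by one: every downstream codon is different
[/code]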

Following the binding of these initial components, the remainder of the splicing apparatus assembles around them, in some cases displacing some of the previously bound components.

Question: how could the information to assemble the splicing apparatus correctly have emerged gradually? Would the assembly parts not have had to be there, at the assembly site, fully developed and ready for recruitment? Would the availability of these parts not have had to be synchronized so that at some point, either individually or in combination, they were all available at the same time? Would the assembly not have had to be coordinated in the right way right from the start? Would the parts not have had to be mutually compatible, that is, ‘well-matched’ and capable of properly ‘interacting’? Even if subsystems or parts are put together in the right order, they also need to interface correctly.


Is it feasible that this complex machine was the result of a progressive evolutionary development, in which simple molecules are the start of the biosynthesis chain and are then progressively developed in sequential steps, if the end goal is not known by the process and mechanism promoting the development? How could each intermediate in the pathway be an end point in the pathway, if that end point had no function? Did not each intermediate have to be usable in the past as an end product? And how could it be usable if the amino acid sequence chain had only a fraction of the fully developed sequence? How could successive steps be added to improve the efficiency of a product for which there was no use at this stage? Despite the fact that proponents of naturalism embrace this kind of scenario, it seems obvious that it is extremely unlikely to have been possible that way.

Martin and Koonin admit in their paper “Hypothesis: Introns and the origin of nucleus-cytosol compartmentalization”: The transition to spliceosome-dependent splicing will also impose an unforgiving demand for inventions in addition to the spliceosome. And furthermore: More recent are the insights that there is virtually no evolutionary grade detectable in the origin of the spliceosome, which apparently was present in its (almost) fully fledged state in the common ancestor of eukaryotic lineages studied so far. That's a surprising admission.

This means that the spliceosome appeared fully formed almost abruptly, and that the intron invasion took place over a short time and has not changed for supposedly hundreds of millions of years.

In another interesting paper, "Breaking the second genetic code", the authors write 2: The genetic instructions of complex organisms exhibit a counter-intuitive feature not shared by simpler genomes: nucleotide sequences coding for a protein (exons) are interrupted by other nucleotide regions that seem to hold no information (introns). This bizarre organization of genetic messages forces cells to remove introns from the precursor mRNA (pre-mRNA) and then splice together the exons to generate translatable instructions. An advantage of this mechanism is that it allows different cells to choose alternative means of pre-mRNA splicing and thus generates diverse messages from a single gene. The variant mRNAs can then encode different proteins with distinct functions. One difficulty with understanding alternative pre-mRNA splicing is that the selection of particular exons in mature mRNAs is determined not only by intron sequences adjacent to the exon boundaries, but also by a multitude of other sequence elements present in both exons and introns. These auxiliary sequences are recognized by regulatory factors that assist or prevent the function of the spliceosome — the molecular machinery in charge of intron removal.

Moreover, coupling between RNA processing and gene transcription influences alternative splicing, and recent data implicate the packing of DNA with histone proteins and histone covalent modifications — the epigenetic code — in the regulation of splicing. The interplay between the histone and the splicing codes will therefore need to be accurately formulated in future approaches. 

Question: how could natural mechanisms have provided the tuning, synchronization and coordination between the histone and the splicing codes? First, these two codes and their carrier proteins and molecules (the hardware and software) would have to emerge by themselves, and in a second step orchestrate their coordination. Why is it reasonable to believe that unguided, random chemical reactions would be capable of producing such immensely complex organismal functions?

Fazale Rana puts it nicely: Astounding is the fact that other codes, such as the histone binding code, transcription factor binding code, the splicing code, and the RNA secondary structure code, overlap the genetic code. Each of these codes plays a special role in gene expression, but they also must work together in a coherent integrated fashion.
 
arg-fallbackName="Mr_Wilford"/>
:facepalm:

Evolutionary processes have been observed to make irreducibly complex systems. This has been shown to you, by Rumraket, several times.

Taking this into account, there's no reason to even read your plagiarized spam.
 
arg-fallbackName="Elshamah"/>
itsdemtitans said:
:facepalm:

Evolutionary processes have been observed to make irreducibly complex systems. This has been shown to you, by Rumraket, several times.

Taking this into account, there's no reason to even read your plagiarized spam.

no plagiarized spam. Thats my article, from my personal virtual library. And Rumraket has not debunked anything so far. Your blind faith is telling.
 
arg-fallbackName="SpecialFrog"/>
Elshamah said:
no plagiarized spam. Thats my article, from my personal virtual library.

Elshamah said:
Bacterial mRNAs are synthesized by the RNA polymerase starting and stopping at specific spots on the genome. The situation in eukaryotes is substantially different. In particular, transcription is only the first of several steps needed to produce a mature mRNA molecule.

Molecular Biology of the Cell said:
We have seen that bacterial mRNAs are synthesized by the RNA polymerase starting and stopping at specific spots on the genome. The situation in eukaryotes is substantially different. In particular, transcription is only the first of several steps needed to produce a mature mRNA molecule.

Liar, liar, pants on fire.
 
arg-fallbackName="Mr_Wilford"/>
SpecialFrog said:
Elshamah said:
no plagiarized spam. Thats my article, from my personal virtual library.

Elshamah said:
Bacterial mRNAs are synthesized by the RNA polymerase starting and stopping at specific spots on the genome. The situation in eukaryotes is substantially different. In particular, transcription is only the first of several steps needed to produce a mature mRNA molecule.

Molecular Biology of the Cell said:
We have seen that bacterial mRNAs are synthesized by the RNA polymerase starting and stopping at specific spots on the genome. The situation in eukaryotes is substantially different. In particular, transcription is only the first of several steps needed to produce a mature mRNA molecule.

Liar, liar, pants on fire.

:lol:
 
arg-fallbackName="DutchLiam84"/>
Re: The awe inspiring spliceosome, the most complex macromol

Elshamah said:
Thats my article
:lol:

I can press Ctrl-c and Ctrl-v too, now it's MY article, just look below!!!!!!!!!!!!!!! Everything from your website is now mine Elshamah, I copy-pasted it fair and square.
ME!!!!! said:
The awe inspiring spliceosome, the most complex macromolecular machines known, and pre-mRNA processing in eukaryotic cells

http://reasonandscience.heavenforum.org/t2180-the-spliceosome-the-splicing-code-and-pre-mrna-processing-in-eukaryotic-cells

Along the way to make proteins in eukaryotic cells,  there is a whole chain of subsequent events that must all be fully operational, and the machinery in place, in order to get the functional product, that is  proteins. At the beginning of the process, DNA is transcribed in the RNA polymerase molecular machine, to yield messenger RNA ( mRNA ) , which afterwards must go through post transcriptional modifications. That is CAPPING,  ELONGATION,  SPLICING, CLEAVAGE,POLYADENYLATION AND TERMINATION , before it can be EXPORTED FROM THE NUCLEUS TO THE CYTOSOL,  and PROTEIN SYNTHESIS INITIATED, (TRANSLATION), and  COMPLETION OF PROTEIN SYNTHESIS AND PROTEIN FOLDING.

Bacterial mRNAs are synthesized by the RNA polymerase starting and stopping at specific spots on the genome. The situation in eukaryotes is substantially different. In particular, transcription is only the first of several steps needed to produce a mature mRNA molecule. The mature transcript for many genes is encoded in a discontinuous manner in a series of discrete exons, which are separated from each other along the DNA strand by non-coding introns. mRNAs, rRNAs, and tRNAs can all contain introns that must be removed from precursor RNAs to produce functional molecules.The formidable task of identifying and splicing together exons among all the intronic RNA is performed by a large ribonucleoprotein machine, the spliceosome, which is composed of several individual small nuclear ribonucleoproteins,  five snRNPs,  pronounced ‘snurps’, (U1, U2, U4, U5, and U6) each containing an RNA molecule called an snRNA that is usually 100–300 nucleotides long, plus additional protein factors that recognize specific sequences in the mRNA or promote conformational rearrangements in the spliceosome required for the splicing reaction to progress, and many more additional proteins that come and go during the splicing reaction.  It has been described as one of "the most complex macromolecular machines known," "composed of as many as 300 distinct proteins and five RNAs".

The snRNAs perform many of the spliceosome’s mRNA recognition events. Splice site consensus sequences are recognized by non-snRNP factors; the branch-point sequence is recognized by the branch-point-binding protein (BBP), and the polypyrimidine tract and 3′ splice site are bound by two specific protein components of a splicing complex referred to as U2AF (U2 auxiliary factor), U2AF65 and U2AF35, respectively.

This is one more great example of a amazingly complex molecular machine, that will operate and exercise its precise orchestrated function properly ONLY with ALL components fully developed and formed and able to interact in a highly complex, ordered , precise manner. Both, the software, and the hardware, must be in place fully developed, or the mechanism will not work. No intermediate stage will do the job. And neither would  snRNPs (U1, U2, U4, U5, and U6) have any function if not fully developed. And even if they were there, without the branch-point-binding protein (BBP) in place, nothing done, either, since the correct splice site could not be recognized. Had the introns and exons not have to emerge simultaneously with the Spliceosome ? No wonder, does the paper : " Origin and evolution of spliceosomal introns " admit:  Evolution of exon-intron structure of eukaryotic genes has been a matter of long-standing, intensive debate. 1 and it  concludes that : The elucidation of the general scenario of evolution of eukaryote gene architecture by no account implies that the main problems in the study of intron evolution and function have been solved. Quite the contrary, fundamental questions remains wide open. If the first evolutionary step would have been the arise of  self-splicing Group II introns, then the question would follow : Why would evolution not have stopped there, since that method works just fine ? 


There is no credible road map for how introns, exons, and the splicing function could have emerged gradually. What good would the spliceosome be if the essential sequence elements that mark where to splice were not in place? What would happen if the pre-mRNA with exons and introns were in place, but no spliceosome were ready to carry out the post-transcriptional processing, and no splicing code to direct where to splice? In the article "'JUNK' DNA HIDES ASSEMBLY INSTRUCTIONS", the author, Wang, observes that splicing "is a tightly regulated process, and a great number of diseases are caused by the 'misregulation' of splicing in which the gene was not cut and pasted correctly." Mis-splicing in the cell can have dire consequences: the desired product is not produced, and often the wrong products can be toxic for the cell. For this reason, it has been proposed that ATPases are important for 'proofreading' mechanisms that promote fidelity in splice site selection. In his textbook Essentials of Molecular Biology, George Malacinski points out why proper polypeptide production is critical:

"A cell cannot, of course, afford to miss any of the splice junctions by even a single nucleotide, because this could result in an interruption of the correct reading frame, leading to a truncated protein." 


The required precision is quite amazing, and even more astounding is the fact that these incredibly complex molecular machines are capable of doing the job in precisely the manner needed.
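Malacinski's point about reading frames can be illustrated with a short, hypothetical Python sketch: the codon assignments used are from the standard genetic code, but the sequences and the "mis-splicing" are invented for the example.

[code]
# Minimal illustration (not a real bioinformatics tool) of the point in the quote:
# shifting the reading frame by a single retained nucleotide can create a premature
# stop codon and hence a truncated protein. Only the codons actually used are listed.

CODONS = {"AUG": "M", "GCU": "A", "GAA": "E", "UGG": "W",
          "GGC": "G", "UAA": "*", "UGA": "*"}     # * marks a stop codon

def translate(mrna: str) -> str:
    """Read codon by codon from the start; stop at the first stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODONS.get(mrna[i:i + 3], "?")       # '?' for codons outside this toy table
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

if __name__ == "__main__":
    correct     = "AUG" + "GCU" + "GAA" + "UGG" + "UAA"   # properly spliced message
    mis_spliced = "AUG" + "G" + "GCUGAAUGGUAA"            # one extra intron nucleotide retained
    print(translate(correct))      # MAEW  (full-length toy protein)
    print(translate(mis_spliced))  # MG    (frameshift hits a premature stop: truncated)
[/code]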

Following the binding of these initial components, the remainder of the splicing apparatus assembles around them, in some cases displacing some of the previously bound components.

Question: How could the information to assemble the splicing apparatus correctly have emerged gradually? For that to happen, would the assembly parts not have had to be there, at the assembly site, fully developed and ready for recruitment? Would the availability of these parts not have had to be synchronized so that at some point, either individually or in combination, they were all available at the same time? Would the assembly not have had to be coordinated in the right way from the start? Would the parts not have had to be mutually compatible, that is, ‘well-matched’ and capable of properly ‘interacting’? Even if subsystems or parts are put together in the right order, they also need to interface correctly.


Is it feasible that this complex machine was the result of a progressive evolutionary development, in which simple molecules are the start of the biosynthesis chain and are then progressively developed in sequential steps, if the end goal is not known by the process and mechanism promoting the development? How could each intermediate in the pathway be an end point in the pathway if that end point had no function? Did not each intermediate have to be usable in the past as an end product? And how could they be usable if the amino acid sequence chain had only a fraction of the fully developed sequence? How could successive steps be added to improve the efficiency of a product for which there was no use at that stage? Despite the fact that proponents of naturalism embrace this kind of scenario, it seems obvious that it is extremely unlikely to have happened that way.

Martin and Koonin admit in their paper “Hypothesis: Introns and the origin of nucleus-cytosol compartmentalization”: "The transition to spliceosome-dependent splicing will also impose an unforgiving demand for inventions in addition to the spliceosome." And furthermore: "More recent are the insights that there is virtually no evolutionary grade detectable in the origin of the spliceosome, which apparently was present in its (almost) fully fledged state in the common ancestor of eukaryotic lineages studied so far." That's a surprising admission.

This means that the spliceosome appeared almost abruptly, fully formed, that the intron invasion took place over a short time, and that the system has supposedly not changed for hundreds of millions of years.

In another interesting paper, "Breaking the second genetic code", the authors write 2: The genetic instructions of complex organisms exhibit a counter-intuitive feature not shared by simpler genomes: nucleotide sequences coding for a protein (exons) are interrupted by other nucleotide regions that seem to hold no information (introns). This bizarre organization of genetic messages forces cells to remove introns from the precursor mRNA (pre-mRNA) and then splice together the exons to generate translatable instructions. An advantage of this mechanism is that it allows different cells to choose alternative means of pre-mRNA splicing and thus generates diverse messages from a single gene. The variant mRNAs can then encode different proteins with distinct functions. One difficulty with understanding alternative pre-mRNA splicing is that the selection of particular exons in mature mRNAs is determined not only by intron sequences adjacent to the exon boundaries, but also by a multitude of other sequence elements present in both exons and introns. These auxiliary sequences are recognized by regulatory factors that assist or prevent the function of the spliceosome — the molecular machinery in charge of intron removal.

Moreover, coupling between RNA processing and gene transcription influences alternative splicing, and recent data implicate the packing of DNA with histone proteins and histone covalent modifications — the epigenetic code — in the regulation of splicing. The interplay between the histone and the splicing codes will therefore need to be accurately formulated in future approaches. 
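The alternative-splicing idea quoted above, one gene yielding several mature messages depending on which exons are chosen, can be pictured with a toy enumeration. The exon names, and the rule that only the two middle exons are optional, are assumptions made purely for illustration:

[code]
from itertools import product

# Purely illustrative sketch of alternative splicing: one pre-mRNA, several optional
# ("cassette") exons, many possible mature messages. All names are made up.

CONSTITUTIVE = ["exon1", "exon4"]            # always included (toy assumption)
CASSETTES = ["exon2", "exon3"]               # each may be included or skipped

def isoforms():
    """Yield every mature-mRNA exon arrangement allowed by this toy gene model."""
    for choices in product([True, False], repeat=len(CASSETTES)):
        included = [e for e, keep in zip(CASSETTES, choices) if keep]
        yield [CONSTITUTIVE[0]] + included + [CONSTITUTIVE[1]]

if __name__ == "__main__":
    for iso in isoforms():
        print("-".join(iso))
    # exon1-exon2-exon3-exon4, exon1-exon2-exon4, exon1-exon3-exon4, exon1-exon4
[/code]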

Question: How could natural mechanisms have provided the tuning, synchronization and coordination between the histone and the splicing codes? First, these two codes and the carrier proteins and molecules (the hardware and software) would have had to emerge by themselves, and in a second step orchestrate their coordination. Why is it reasonable to believe that unguided, random chemical reactions would be capable of producing such immensely complex organismal functions?

Fazale Rana puts it nicely :  Astounding is the fact that other codes, such as the histone binding code, transcription factor binding code, the splicing code, and the RNA secondary structure code, overlap the genetic code. Each of these codes plays a special role in gene expression, but they also must work together in a coherent integrated fashion.

Disclaimer: The opinions written in this article by me, me as in the author of this piece, do not reflect what I actually think about the subject.

p.s. Did I mention I wrote this? It's mine!
 
arg-fallbackName="Rumraket"/>
Re: Proteintransport into Mitochondria is irreducible comple

Elshamah said:
voyage dans la mayonnaise much, Rumraket ??

your whole post is bunk.....
You have failed to respond to any part of it at all.
 
arg-fallbackName="Rumraket"/>
Re: The awe inspiring spliceosome, the most complex macromol

Elshamah said:
The awe inspiring spliceosome, the most complex macromolecular machines known, and pre-mRNA processing in eukaryotic cells

http://reasonandscience.heavenforum.org/t2180-the-spliceosome-the-splicing-code-and-pre-mrna-processing-in-eukaryotic-cells

Already been over this. Same fundamental mistake you make every time. Evolution demonstrably produces multi-component irreducibly complex structures; in fact, we predict they will emerge through the evolutionary process, and we have seen it happen in experiments without any guidance or design.
Rumraket said:
Irreducible complexity is not a successful argument against evolution for reasons already stated in your three other threads.

In fact we have observed the origin of an irreducibly complex pathway for the utilization of citrate under aerobic conditions in Richard Lenski's long-term evolution experiment with E. coli.

A gene duplication spawned a copy of the citrate transporter in the vicinity of a regulatory element that is only active under aerobic conditions. This allows the cells to use citrate when oxygen is present, which they normally cannot do.

If you remove the duplicate gene, the cell can no longer use citrate with oxygen present. If you remove the regulatory element, the citrate transporter fails to activate when oxygen is present, and the cell cannot use citrate and will die if there is no other food available. So there you go, a two-component, irreducibly complex system that requires both components to be present to work. If you remove one of the components, the system stops working. So it is irreducibly complex and it evolved.

If it is irreducibly complex it can still evolve. In fact we expect that the evolutionary process will create irreducibly complex structures. Do you understand this? If evolution is true, there should be irreducibly complex structures in living organisms.
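The two-component dependency described above can be pictured as a simple truth-table sketch in Python; this is a schematic illustration of the logic, not the actual E. coli genetics (the duplicated gene copy and the regulatory element are reduced to booleans):

[code]
# Schematic sketch (not real genetics) of the two-component dependency described
# above: aerobic citrate use requires BOTH the duplicated citrate-transporter copy
# AND the aerobically active regulatory element driving it.

def can_use_citrate_aerobically(has_duplicate_transporter: bool,
                                has_aerobic_regulator: bool,
                                oxygen_present: bool) -> bool:
    """Transporter is only expressed in oxygen if the aerobic regulator drives it."""
    return has_duplicate_transporter and has_aerobic_regulator and oxygen_present

if __name__ == "__main__":
    print(can_use_citrate_aerobically(True,  True,  True))   # True : full system works
    print(can_use_citrate_aerobically(False, True,  True))   # False: remove the gene copy
    print(can_use_citrate_aerobically(True,  False, True))   # False: remove the regulator
[/code]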
 
arg-fallbackName="Dragan Glas"/>
Greetings,
Elshamah said:
itsdemtitans said:
:facepalm:

Evolutionary processes have been observed to make irreducibly complex systems. This has been shown to you, by Rumraket, several times.

Taking this into account, there's no reason to even read your plagiarized spam.
no plagiarized spam.
On the contrary, you've plagiarised it from several articles.
Elshamah said:
Thats my article, from my personal virtual library.
No, it isn't "my article".

Kindly show us which word(s) is/are yours?
Elshamah said:
And Rumraket has not debunked anything so far.
Again, on the contrary, you've yet to prove that anything you've plagiarised from other sources disproves (the theory of) evolution.

Rumraket - and others - have shown that your premise is a fallacy.

You've yet to prove anything.
Elshamah said:
Your blind faith is telling.
Again, on the contrary, your blind faith is telling.

Kindest regards,

James
 
arg-fallbackName="he_who_is_nobody"/>
SpecialFrog said:
Elshamah said:
no plagiarized spam. Thats my article, from my personal virtual library.

Elshamah said:
Bacterial mRNAs are synthesized by the RNA polymerase starting and stopping at specific spots on the genome. The situation in eukaryotes is substantially different. In particular, transcription is only the first of several steps needed to produce a mature mRNA molecule.

Molecular Biology of the Cell said:
We have seen that bacterial mRNAs are synthesized by the RNA polymerase starting and stopping at specific spots on the genome. The situation in eukaryotes is substantially different. In particular, transcription is only the first of several steps needed to produce a mature mRNA molecule.

Liar, liar, pants on fire.

[sarcasm]Well, I guess it is a good thing that Elshamah does not believe in a god that would punish liars.[/sarcasm]
 
arg-fallbackName="Mr_Wilford"/>
Just what is it with creationists and their constant assertions that rebuttals of their posts are wrong, while refusing to explain further? B.V did it, Jerome at ratskep did it, and now I see Elshamah is doing it.

Of course, it's easier to assert your opponent is wrong than actually prove that point, I assume.
 
arg-fallbackName="Elshamah"/>
The astonishing  language written on microtubules, amazing evidence of  design

http://reasonandscience.heavenforum.org/t2096-the-astonishing-language-written-on-microtubules-amazing-evidence-of-design

The following information is truly mind-boggling. Take your time to read it all through, and check the links. The creator of life has left a wealth of evidence for his existence in creation. Every living cell is a treasure trove of evidence for intelligent design. It's widely known that DNA is an advanced information storage device, encoding complex specified information to make proteins and directing many highly complex processes in the cell. What is less known is that there are several other code systems as well, namely the histone binding code, the transcription factor binding code, the splicing code, and the RNA secondary structure code. And there is another astonishing code system, called the tubulin code, which is being unravelled in recent scientific research. It is known so far that, amongst other things, it directs and signals kinesin and myosin motor proteins precisely where and when to disengage from nanomolecular superhighways and deliver their cargo.

http://reasonandscience.heavenforum.org/t1448-kinesin-motor-proteins-amazing-cargo-carriers-in-the-cell?highlight=kinesin

Recent research holds that this code, in an amazing manner, even stores our memories in the brain and makes them available over the long term.

http://reasonandscience.heavenforum.org/t2182-heres-an-incredible-idea-for-how-memory-works#4032

For cells to function properly, they must organize themselves and interact mechanically with each other and with their environment. They have to be correctly shaped, physically robust, and properly structured internally. Many have to change their shape and move from place to place. All cells have to be able to rearrange their internal components as they grow, divide, and adapt to changing circumstances. These spatial and mechanical functions depend on a remarkable system of filaments called the cytoskeleton. The cytoskeleton's varied functions depend on the behavior of three families of protein filaments—actin filaments, microtubules, and intermediate filaments. Microtubules are very important in a number of cellular processes. They are involved in maintaining the structure of the cell and provide a platform for intracellular macromolecular assemblies through dynein and kinesin motors. They are also involved in chromosome separation (mitosis and meiosis), and are the major constituents of mitotic spindles, which are used to pull apart eukaryotic chromosomes. Mitotic cell division is the most fundamental task of all living cells. Cells have intricate and tightly regulated machinery to ensure that mitosis occurs with appropriate frequency and high fidelity. If someone wants to explain the origin of eukaryotic cells, the origin of mitosis and of the mechanisms, cell organelles and proteins involved must be elucidated. The centrosome plays a crucial role: it functions as the major microtubule-organizing center and plays a vital role in guiding chromosome segregation during mitosis. In the centrosome, two centrioles reside at right angles to each other, connected at one end by fibers.
These architecturally perfect structures are essential in many animal cells and plants (though not in flowering plants or fungi, or in prokaryotes). They help organize the centrosomes, whose spindles of microtubules during cell division reach out to the lined-up chromosomes and pull them into the daughter cells.

http://reasonandscience.heavenforum.org/t2090-centriole-centrosome-the-centriole-spindle-the-most-complex-machine-known-in-nature?highlight=spindle

α- and β-tubulin heterodimers are the structural subunits of microtubules. The structure is divided into the amino-terminal domain containing the nucleotide-binding region, an intermediate domain containing the Taxol-binding site, and the carboxy-terminal domain, which probably constitutes the binding surface for motor proteins. Unless all three functional domains were fully functional right from the beginning, tubulins would have no useful function. There would be no reason for the Taxol-binding site to exist without motor proteins existing. Dynamic instability, the stochastic switching between growth and shrinkage, is essential for microtubule function.

http://reasonandscience.heavenforum.org/t2096-the-cytoskeleton-microtubules-and-post-translational-modification#4033

Microtubule dynamics inside the cell are governed by a variety of proteins that bind tubulin dimers or microtubules. Proteins that bind to microtubules are collectively called microtubule-associated proteins, or MAPs. The MAP family includes large proteins like MAP-1A, MAP-1B, MAP-1C, MAP-2, and MAP-4 and smaller components like tau and MAP-2C.

This is highly relevant. Microtubules depend on microtubule-associated proteins for proper function. Interdependence is a hallmark of intelligent design, and strong evidence that both microtubules and MAPs had to emerge together, at the same time, since one depends on the other for proper function. But more than that: microtubules are essential to form the cytoskeleton, which is essential for cell shape and structure. In a few words: no MAPs, no proper function of microtubules. No microtubules, no proper function of the cytoskeleton. No cytoskeleton, no properly functioning cell. The evidence is very strong that all these elements had to arise together at once. Kinesin and dynein belong to the MAP proteins. Kinesin-13 proteins contribute microtubule-depolymerizing activity to the centrosome and centromere during mitosis. These activities have been shown to be essential for spindle morphogenesis and chromosome segregation. A step-wise evolutionary emergence of eukaryotic cells is not feasible, since several parts of the cell can only work if they interact together in an interlocked, fully developed system.

When incorporated into microtubules, tubulin accumulates a number of post-translational modifications, many of which are unique to these proteins. These modifications include detyrosination, acetylation, polyglutamylation, polyglycylation, phosphorylation, ubiquitination, sumoylation, and palmitoylation. The α- and β-tubulin heterodimer undergoes multiple post-translational modifications (PTMs). The modified tubulin subunits are non-uniformly distributed along microtubules. Analogous to the model of the ‘histone code’ on chromatin, diverse PTMs are proposed to form a biochemical ‘tubulin code’ that can be ‘read’ by factors that interact with microtubules.
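Purely as an illustration of the combinatorics, the sketch below treats the modifications listed above as simple presence/absence marks on a single dimer and counts the distinct "tags" that would allow; real PTMs such as polyglutamylation also vary in chain length, which is ignored here:

[code]
from itertools import chain, combinations

# Illustrative sketch only: treat the tubulin PTMs listed above as presence/absence
# marks on a dimer and count the distinct combinations such a marking scheme could
# in principle produce. No biochemistry is modeled.

PTMS = ["detyrosination", "acetylation", "polyglutamylation", "polyglycylation",
        "phosphorylation", "ubiquitination", "sumoylation", "palmitoylation"]

def all_mark_combinations(ptms):
    """Every subset of PTMs that a single dimer could carry (the power set)."""
    return chain.from_iterable(combinations(ptms, k) for k in range(len(ptms) + 1))

if __name__ == "__main__":
    tags = list(all_mark_combinations(PTMS))
    print(len(tags), 2 ** len(PTMS))   # 256 256 : 2^8 presence/absence combinations
[/code]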

This is a relevant and amazing fact, and it raises the question of how the "tubulin code", besides the several other codes in the cell, emerged. In my view, this once more shows that intelligence was required to create these amazing biomolecular structures; the formation of coded information has only ever been shown to be produced by intelligent minds. What good would the tubulin code be if no specific goal was foreseen? That is, it acts as an emitter of information, and if there is no destination and receiver of the information, there is no reason for the code to arise in the first place. So both sender and receiver must first exist as hardware: the microtubules with the post-translationally modified tubulin units in a specified coded conformation, and the receiver, which can be MAPs in general, or kinesin or myosin motor proteins, which are directed to the right destination to fulfill specific tasks, or other proteins directed to specific jobs.

Taken together, multiple and complex tubulin PTMs provide a myriad of combinatorial possibilities to specifically ‘tag’ microtubule subpopulations in cells, thus destining them for precise functions. How this tubulin or microtubule code allows cells to divide, migrate, communicate and differentiate in an ordered manner is an exciting question that needs to be answered in the near future. Initial insights have already revealed the potential roles of tubulin PTMs in a number of human pathologies, like cancer, neurodegeneration and ciliopathies. This raises the question: if PTMs are not precise and fully functioning, they cause diseases. What if the MAPs are not fully specified and evolved? There is a threshold, a dividing line, between an amino acid sequence that is non-functional and one that has enough residues to fold properly and become functional. How proteins arose in the first place is a mystery for proponents of natural mechanisms. Not only does it have to be elucidated how this tubulin or microtubule code allows cells to do all these tasks, but also what best explains its arising and encoding. Most of these enzymes are specific to tubulin and microtubule post-translational modifications. They are only of use if microtubules exist. Microtubules, however, require these enzymes to modify their structures. It can therefore be concluded that they are interdependent and could not arise independently by natural evolutionary mechanisms.

An emerging hypothesis is that tubulin modifications specify a code that dictates biological outcomes through changes in higher-order microtubule structure and/or by recruiting and interacting with effector proteins. This hypothesis is analogous to the histone code hypothesis ‑ that modifications on core histones, acting in a combinatorial or sequential fashion, specify multiple functions of chromatin such as changes in higher-order chromatin structure or selective activation of transcription. The apparent parallels between these two types of structural frameworks, chromatin in the nucleus and microtubules in the cytoplasm, are intriguing.

Isn't that striking evidence of a common designer who invented both codes?

http://reasonandscience.heavenforum.org/t2096-the-cytoskeleton-microtubules-and-post-translational-modification#4035

Microtubules are typically nucleated and organized by dedicated organelles called microtubule-organizing centres (MTOCs). Contained within the MTOC is another type of tubulin, γ-tubulin, which is distinct from the α- and β-subunits of the microtubules themselves. The γ-tubulin combines with several other associated proteins to form a lock-washer-like structure known as the γ-tubulin ring complex (γ-TuRC). This complex acts as a template for α/β-tubulin dimers to begin polymerization; it acts as a cap on the (−) end while microtubule growth continues away from the MTOC in the (+) direction. The γ-tubulin small complex (γTuSC) is the conserved, essential core of the microtubule nucleating machinery, and it is found in nearly all eukaryotes.

This γ-tubulin ring complex is a striking example of purposeful design, required to nucleate the microtubules into the right shape. There would be no reason for the γ-tubulin ring complex to emerge without microtubules, since it would have no function on its own. Furthermore, it is made of several subunits which are indispensable for proper use, for example the attachment factors, accessory proteins, and γ-tubulins, which constitute an irreducible γ-tubulin ring complex made of several interlocked parts, which could not emerge by natural selection. The complex only has a purposeful function when microtubules have to be assembled. So the γ-tubulin ring complex and microtubules are interdependent.

See its striking structure here :

http://reasonandscience.heavenforum.org/t2096-the-cytoskeleton-microtubules-and-post-translational-modification#4040

Here’s an Incredible Idea For How Memory Works

Cytoskeletal Signaling: Is Memory Encoded in Microtubule Lattices by CaMKII Phosphorylation?

How the brain could store information long-term has been something of a mystery. But now researchers have developed a very interesting idea of how the brain’s neurons could store information using, believe it or not, a binary encoding scheme based on phosphorylation:

Memory is attributed to strengthened synaptic connections among particular brain neurons, yet synaptic membrane components are transient, whereas memories can endure. This suggests synaptic information is encoded and ‘hard-wired’ elsewhere, e.g. at molecular levels within the post-synaptic neuron. In long-term potentiation (LTP), a cellular and molecular model for memory, post-synaptic calcium ion (Ca2+) flux activates the hexagonal Ca2+-calmodulin dependent kinase II (CaMKII), a dodecameric holoenzyme containing 2 hexagonal sets of 6 kinase domains.
This enzyme has an astonishing and remarkable configuration and functionality:

Each kinase domain can either phosphorylate substrate proteins, or not (i.e. encoding one bit). Thus each set of extended CaMKII kinases can potentially encode synaptic Ca2+ information via phosphorylation as ordered arrays of binary ‘bits’. Candidate sites for CaMKII phosphorylation-encoded molecular memory include microtubules (MTs), cylindrical organelles whose surfaces represent a regular lattice with a pattern of hexagonal polymers of the protein tubulin. Using molecular mechanics modeling and electrostatic profiling, we find that spatial dimensions and geometry of the extended CaMKII kinase domains precisely match those of MT hexagonal lattices. This suggests sets of six CaMKII kinase domains phosphorylate hexagonal MT lattice neighborhoods collectively, e.g. conveying synaptic information as ordered arrays of six “bits”, and thus “bytes”, with 64 to 5,281 possible bit states per CaMKII-MT byte. Signaling and encoding in MTs and other cytoskeletal structures offer rapid, robust solid-state information processing which may reflect a general code for MT-based memory and information processing within neurons and other eukaryotic cells.
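The "one kinase domain, one bit" idea in the passage quoted above can be sketched as packing six binary phosphorylation states into a 6-bit value. This is only an illustration of the arithmetic, not a model of CaMKII chemistry:

[code]
# Minimal sketch of the encoding idea quoted above: six kinase domains, each either
# phosphorylating its tubulin neighbor (1) or not (0), read together as a 6-bit value.

def encode_hexagon(phospho_states):
    """Pack six binary phosphorylation states into one integer (0..63)."""
    assert len(phospho_states) == 6
    value = 0
    for bit in phospho_states:           # most significant "kinase" first
        value = (value << 1) | (1 if bit else 0)
    return value

if __name__ == "__main__":
    print(encode_hexagon([0, 0, 0, 0, 0, 0]))   # 0
    print(encode_hexagon([1, 1, 1, 1, 1, 1]))   # 63
    print(encode_hexagon([1, 0, 1, 0, 1, 0]))   # 42
[/code]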

The size and geometry of the activated hexagonal CaMKII holoenzyme and of the two types of hexagonal lattices (A and B) in MTs are identical. Six extended kinases can interface collectively with six tubulins.

Is the precise interface matching a striking coincidence, or purposeful design? Either an intelligent, goal-oriented creator made the sizes match, so that CaMKII would fit the hexagonal lattices, or it is the result of unguided, random, evolutionary processes. Which explanation makes more sense?

The electrostatic pattern formed by a neighborhood of tubulin dimers on a microtubule (MT) surface shows highly negatively charged regions surrounded by a less pronounced positive background, dependent on the MT lattice type. These electrostatic fingerprints are complementary to those formed by the 6 CaMKII holoenzyme kinase domains, making the two natural substrates for interaction. Alignment of the CaMKII holoenzyme with tubulin dimers in the A-lattice MT arrangement yields converging electric field lines, indicating a mutually attractive interaction.

So, in addition to the precise interface matching, the significant association of the CaMKII holoenzyme with the MT through electrostatic forces provides cumulative evidence of design.

There are 2^6 = 64 possible encoding states for a single CaMKII-MT interaction, corresponding to the storage of 6 bits of information. This case, however, only accounts for either α- or β-tubulin phosphorylation, not both. In the second scenario each tubulin dimer is considered to have three possible states – no phosphorylation (0), β-tubulin phosphorylation (1), or α-tubulin phosphorylation (2) (see Figure 5B). These are ternary states, or ‘trits’ (rather than bits). Six possible sites on the A-lattice yield 3^6 = 729 possible states. The third scenario considers the 9-tubulin B-lattice neighborhood with ternary states. As in the previous scenarios, the central dimer is not considered available for phosphorylation. In this case, 6 tubulin dimers out of 8 may be phosphorylated in three possible ways. The total number of possible states for the B-lattice neighborhood is thus 3^8 − 2^8 − 8(2^7) = 5281 unique states.

So, thirdly, we have here an advanced information-encoding mechanism, which, together with the precise interface matching and the electrostatic interactions, adds further cumulative evidence of design.
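For what it is worth, the three state counts quoted above check out numerically under one reading of the counting rules (6 binary sites; 6 ternary sites; 8 ternary sites of which at most six may be phosphorylated), which is my assumption about how the figures were derived:

[code]
from itertools import product

# Brute-force check of the state counts quoted above, under one assumed reading of
# the counting rules: 6 binary sites; 6 ternary sites; and 8 ternary sites of which
# at most 6 may be phosphorylated (non-zero).

binary_6  = sum(1 for s in product([0, 1], repeat=6))                    # 2**6
ternary_6 = sum(1 for s in product([0, 1, 2], repeat=6))                 # 3**6
ternary_8_max6 = sum(1 for s in product([0, 1, 2], repeat=8)
                     if sum(1 for x in s if x != 0) <= 6)                # 3**8 - 2**8 - 8*2**7

print(binary_6, ternary_6, ternary_8_max6)   # 64 729 5281
[/code]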

http://reasonandscience.heavenforum.org/t2181-cell-communication-and-signalling-evidence-of-design#4019

Motor proteins dynein and kinesin move along microtubules (using ATP as fuel) to transport and deliver components and precursors to specific synaptic locations. While microtubules are assumed to function as passive guides, like railroad tracks for motor proteins, the guidance mechanism seems to work through CaMKII kinase enzymes, which "write" on microtubules through phosphorylation, thereby encoding how motor protein transport along microtubules is regulated and signalling motor proteins precisely where and when to disengage from microtubules and deliver their cargo. There needs to be programming all the way along: programming to make the specific enzymes, and programming for how they have to operate. That constitutes, in my view, another amazing argument for intelligent design.
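The "write on the track, motor reads the mark and drops its cargo" picture described above can be caricatured in a few lines; every detail here (track length, a single mark, the walking rule) is invented purely for illustration:

[code]
# Toy "write then read" sketch of the guidance idea described above: a writer marks
# one position on a microtubule-like track, and a kinesin-like walker steps along it
# and drops its cargo at the marked position. Nothing about real CaMKII or kinesin
# kinetics is modeled.

def write_mark(track_length: int, position: int) -> list:
    """Return a track with a single phosphorylation mark at `position`."""
    track = [False] * track_length
    track[position] = True
    return track

def walk_and_deliver(track: list) -> int:
    """Step from the start of the track; return the index where cargo is released."""
    for index, marked in enumerate(track):
        if marked:
            return index            # motor disengages and delivers its cargo here
    return len(track) - 1           # no mark found: carried to the end of the track

if __name__ == "__main__":
    track = write_mark(track_length=20, position=13)
    print(walk_and_deliver(track))  # 13
[/code]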
 