Friday, 15 October 2010

Evolution of genetic networks

Posted by Thomas in http://blog-msb.embo.org/blog/e/evolution_1/


A few days ago, an exciting review by Michael Lynch was published in Nature Reviews Genetics (The evolution of genetic networks by non-adaptive processes, Lynch 2007a), a close follow-up to another review published in PNAS a few months ago (The frailty of adaptive hypotheses for the origins of organismal complexity, Lynch 2007b). Michael Lynch has also written a book on the topic: The Origins of Genome Architecture (read a review).

The architecture of biological networks is often hypothesized to have been "shaped" by adaptive evolution to confer global properties such as redundancy, robustness, modularity, complexity and evolvability. Lynch has some robust comments (others have some too; see Jonathan Eisen's "adaptationomics awards") on the “vast majority of biologists engaged in evolutionary studies [who] interpret virtually every aspect of biodiversity in adaptive terms” (Lynch 2007b). In contrast to what he perceives as a widespread belief, Lynch states clearly:

It is an open question as to whether pathway complexity is a necessary prerequisite for the evolution of complex phenotypes, or whether the genome architectures of multicellular species are simply more conducive to the passive emergence of network connections. (Lynch 2007a)

Beyond its somewhat controversial tone, Lynch's central lesson is the need to adopt a population genetics viewpoint (“nothing in evolution makes sense except in light of population genetics”), and he reminds us that, besides natural selection, three additional non-adaptive processes drive the evolution of living organisms: genetic drift, mutation and recombination. By analyzing the interplay between the relative rates of loss and gain of regulatory sites (which depend both on the mutation rate and on the mutational target size, such as the amount of non-coding DNA), population size and recombination frequency, he demonstrates that purely non-adaptive forces can, in principle, determine the level of connectivity of regulatory networks--for example, the predominance of highly connected network motifs over linear pathways--without invoking any inherent advantage of the respective architectures for biological functions related, for example, to development or metabolism. It thus appears that, depending on the population-genetic parameters, network structure can be profoundly "shaped" by the mere physical processes of mutation and recombination. At the very least, Lynch proposes that such models should be considered as the "null hypothesis" when claiming that selection has shaped a given aspect of organismal complexity.
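To see what such a "null hypothesis" might look like in the simplest possible terms, here is a minimal drift-only sketch (my own toy illustration, not Lynch's actual population-genetic model; all rates and sizes below are arbitrary). With no fitness term anywhere, the mean number of regulatory connections per gene simply settles at the mutational equilibrium set by the gain and loss rates and the size of the mutational target.

```python
import random

# Toy drift-only model of regulatory connectivity (illustrative sketch only;
# parameters are arbitrary and this is NOT Lynch's actual population-genetic model).
GENES = 200          # genes in the toy genome
MAX_SITES = 10       # potential regulatory sites per gene (mutational target size)
GAIN_RATE = 2e-3     # per-position probability of gaining a functional site per generation
LOSS_RATE = 5e-3     # per-site probability of losing a functional site per generation
GENERATIONS = 5000

sites = [1] * GENES  # every gene starts with a single incoming regulatory connection

for _ in range(GENERATIONS):
    for i in range(GENES):
        # Empty positions can gain a site, occupied positions can lose one;
        # there is no fitness term, so connectivity changes by mutation pressure alone.
        gains = sum(random.random() < GAIN_RATE for _ in range(MAX_SITES - sites[i]))
        losses = sum(random.random() < LOSS_RATE for _ in range(sites[i]))
        sites[i] += gains - losses

print(f"mean regulators per gene after drift alone: {sum(sites) / GENES:.2f}")
print(f"mutational equilibrium expectation:         {MAX_SITES * GAIN_RATE / (GAIN_RATE + LOSS_RATE):.2f}")
```

Even this caricature makes the point concrete: connectivity can rise well above one regulator per gene without any functional advantage being invoked.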

In his review of Lynch's book, Massimo Pigliucci draws our attention to the fact that "the genome is only part of the story, arguably the simplest part to figure out", and that one of the greatest current challenges is to explain how phenotypes evolve. Lynch also recognizes that his models are simplified and do not, for example, consider kinetic or dynamical properties of biological networks. But here is a naive question: would it be possible to design an experimental strategy to test directly, in the lab, the evolution of simple (synthetic?) genetic circuits and observe the trends in connectivity under non-selective conditions, or are the timescales involved simply unrealistic?

Saturday, 16 October 2010

Origins of organismal complexity

The vast majority of biologists engaged in evolutionary studies interpret virtually every aspect of biodiversity in adaptive terms. This narrow view of evolution has become untenable in light of recent observations from genomic sequencing and population-genetic theory. Numerous aspects of genomic architecture, gene structure, and developmental pathways are difficult to explain without invoking the nonadaptive forces of genetic drift and mutation. In addition, emergent biological features such as complexity, modularity, and evolvability, all of which are current targets of considerable speculation, may be nothing more than indirect by-products of processes operating at lower levels of organization. These issues are examined in the context of the view that the origins of many aspects of biological diversity, from gene-structural embellishments to novelties at the phenotypic level, have roots in nonadaptive processes, with the population-genetic environment imposing strong directionality on the paths that are open to evolutionary exploitation.

(...)

Although the basic theoretical foundation for understanding the mechanisms of evolution, the field of population genetics, has long been in place, the central significance of this framework is still occasionally questioned, as exemplified in this quote from Carroll, “Since the Modern Synthesis, most expositions of the evolutionary process have focused on microevolutionary mechanisms. Millions of biology students have been taught the view (from population genetics) that ‘evolution is change in gene frequencies.’ Isn't that an inspiring theme? This view forces the explanation toward mathematics and abstract descriptions of genes, and away from butterflies and zebras…. The evolution of form is the main drama of life's story, both as found in the fossil record and in the diversity of living species. So, let's teach that story. Instead of ‘change in gene frequencies,’ let's try ‘evolution of form is change in development’.” Even ignoring the fact that most species are unicellular and differentiated mainly by metabolic features, this statement illustrates two fundamental misunderstandings. Evolutionary biology is not a story-telling exercise, and the goal of population genetics is not to be inspiring, but to be explanatory. The roots of this contention are fourfold.

First, evolution is a population-genetic process governed by four fundamental forces. Darwin articulated one of those forces, the process of natural selection, for which an elaborate theory in terms of genotype frequencies now exists. The remaining three evolutionary forces are nonadaptive in the sense that they are not a function of the fitness properties of individuals: mutation is the ultimate source of variation on which natural selection acts, recombination assorts variation within and among chromosomes, and genetic drift ensures that gene frequencies will deviate a bit from generation to generation independent of other forces. Given the century of work devoted to the study of evolution, it is reasonable to conclude that these four broad classes encompass all of the fundamental forces of evolution. From Michael Lynch. PNAS May 15, 2007 vol. 104 no. Suppl 1 8597-8604.

Tuesday, 2 November 2010

Top 7 genetics papers

A snapshot of the highest-ranked articles in genetics and related areas in the past 30 days

1. Mapping transcriptomes
While mapping every transcriptional start site and operon of Helicobacter pylori at single-nucleotide resolution, the authors identify novel small RNAs, reveal the widespread nature of antisense transcription, and unveil a new technique to investigate the genomic complexities of other important pathogens, such as Salmonella and Mycobacterium tuberculosis.

2. Epigenetics in mind
The body's tendency to silence the expression of one parental allele in favor of the other -- a phenomenon known as genomic imprinting -- is much more widespread in the brain than scientists have believed, according to a new genome-wide study in mice. Surprisingly, more than 1300 genes expressed in the mouse brain appear to exhibit "parent-of-origin" epigenetic effects.

3. Translation goes local
Protein synthesis is a complicated game, but for the first time researchers have shown direct interaction between a transmembrane receptor, called DCC, and the translational machinery in rodent neurons, a step that likely facilitates localized protein production.

4. No RNA "dark matter"?
Most of the DNA that's transcribed into RNA in fact codes for proteins, a finding that disputes previous studies that suggested that the majority of mammalian transcripts are non-coding "dark matter."

5. Super E. coli
The mother cell of E. coli maintains a constant growth rate throughout its replicative life (hundreds of cell divisions), despite accumulating damage and an increased probability of death, suggesting that growth and aging are decoupled, in contrast to all other aging models studied so far.

6. How autophagosomes form
Under conditions of starvation, autophagosomes form to resupply the cell by bringing nutrients from the cytosol or other organelles to the lysosomes, ensuring the cell's survival. New findings reveal an essential ingredient to this mysterious process: the outer membrane of mitochondria.

7. New tumor targets?
A scan of 1800 megabases of DNA from 441 tumors reveals more than 2500 somatic mutations, providing mutation "spectra" for these cancers and highlighting mutated gene families, including protein kinases and G-protein-coupled receptors, some of which may serve as druggable targets.

Source: The Scientist

Sunday, 22 August 2010

Defining and measuring complexity

What is complexity? For example, is the human genome more complex than the yeast genome (see my post on August 8th, 2010)? We intuitively answer this question with a big "OF COURSE". However, it has been surprisingly difficult to come up with a universally accepted definition of complexity. Although there is not yet a single science of complexity, but rather several different sciences of complexity with different views about what complexity really means, the history of science shows us that the lack of a universally accepted definition of a central term in a new scientific field is more common than not. As an example, modern genetics still does not have a good definition of a gene at the molecular level.

The physicist Seth Lloyd proposed in 2001 three different dimensions along which to measure the complexity of a system:

1) How hard is it to describe?

2) How hard is it to create?

3) What is its degree of organization?

Another interesting proposed measure of complexity is the Shannon entropy, defined as the average information, or "amount of surprise", that a message source has for a receiver. Thus, using a classic example from genetics, we could say that the sequence CGTGGT has a higher entropy than the sequence AAAAAA and is therefore more complex. A completely random sequence has the maximum possible entropy. That means we could well make up an artificial genome by choosing a bunch of random As, Cs, Ts, and Gs. Using entropy as the measure of complexity, this random, almost certainly nonfunctional genome would be considered more complex than the human genome.
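For readers who like to see the arithmetic, here is a minimal sketch of the entropy calculation described above (my own illustration, not taken from Mitchell's book): entropy is computed from the frequencies of the four bases, so a uniform run scores zero and a random sequence approaches the two-bit maximum.

```python
import math
import random
from collections import Counter

def shannon_entropy(seq: str) -> float:
    """Shannon entropy (bits per symbol) of a sequence, from its base frequencies."""
    counts = Counter(seq)
    n = len(seq)
    # Entropy is non-negative; max() just guards against a -0.0 result for uniform runs.
    return max(0.0, -sum((c / n) * math.log2(c / n) for c in counts.values()))

print(shannon_entropy("AAAAAA"))   # 0.0 bits: no surprise at all
print(shannon_entropy("CGTGGT"))   # about 1.46 bits: more "surprise", hence "more complex"
random_genome = "".join(random.choice("ACGT") for _ in range(10000))
print(shannon_entropy(random_genome))  # close to the 2-bit maximum for a 4-letter alphabet
```

By this yardstick the random string wins, which is exactly the weakness pointed out above.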

In conclusion: the most complex entities are not the most ordered or random ones but somewhere in between.

For further reading, see "Complexity: a guided tour" by M. Mitchell.

Wednesday, 2 February 2011

Gene swap key to evolution

Horizontal gene transfer accounts for the majority of prokaryotic protein evolution

Microbes evolve predominantly by acquiring genes from other microbes, new research suggests, challenging previous theories that gene duplication is the primary driver of protein evolution in prokaryotes.

The finding, published in PLoS Genetics, could change the way scientists study and model biological networks and protein evolution.

"Even at a meeting last summer, there were those that thought that bacteria genomes expanded mostly through duplications and others that argued that it was due to gene acquisition," wrote Howard Ochman, an evolutionary biologist at Yale University who was not involved in the research, in an Email to The Scientist. "Now we all have a paper to point to that does a very good job of answering this question," he said. "Their conclusions are really robust."

Prokaryotes, including bacteria and archaea, thrive in diverse conditions thanks to their ability to rapidly modify their repertoire of proteins. This is achieved in two ways: by receiving genes from other prokaryotes, called horizontal gene transfer -- the nefarious way that bacteria acquire antibiotic resistance -- or by gene duplication, in which an existing gene is copied, taking on a new or enhanced function as mutations accumulate.

Past analyses using few, distantly related genomes estimated that horizontal gene transfer contributes to, at best, 25 percent of the expansion of protein families -- that is, the addition of proteins with novel functions or structures. But the recent availability of numerous, closely related prokaryotic genomes tempted Todd Treangen and Eduardo Rocha at the Institut Pasteur in Paris to more accurately test which biological process is the main driver of prokaryote protein evolution. "The genomic data was finally there to do a more in depth study," said Treangen, now a postdoc at the University of Maryland.

The duo analyzed 110 genomes of varying size from 8 clades of prokaryotes, focusing on 3,190 defined protein families. The results were unambiguous: 80 to 90 percent of protein families had expanded through horizontal gene transfer. In addition, the researchers found that the two processes have different evolutionary roles: transferred genes persist longer in populations, while duplicated genes are transient but more highly expressed.

"Overall, the role of gene transfer in protein diversification has been underestimated," said Treangen. Still, he noted, they analyzed only a tiny fraction of the microbes that exist in the world, and further research should be done as more genomes become available.

It would be nice to study the same two processes in eukaryotes, said Patrick Keeling, a molecular evolutionary biologist at the University of British Columbia who was not involved in the research. Yet despite numerous documented cases of horizontal gene transfer in eukaryotes, including plants, it would be hard to test because of the lack of genomic data from enough closely related eukaryotes (which have significantly larger, less manageable genomes than prokaryotes).

Still, "it raises some really fascinating questions about whether [eukaryotes] evolve in the same way," said Keeling.

Treangen, T.J. et al., "Horizontal Transfer, Not Duplication, Drives the Expansion of Protein Families in Prokaryotes," PLoS Genetics, 7:e1001284, 2011.

The Scientist

Friday, 11 February 2011

Common Disease: Are Causative Alleles Common or Rare?

Robert Shields from PLoS Biology

It has been said that a week is a long time in politics. But in human disease gene mapping, 10 years can seem a very short time indeed. It once seemed so simple: find a family with a number of affected individuals and narrow down regions of the genome shared by affected individuals but not their unaffected siblings. This process (family linkage analysis) was lengthy but had notable success with some diseases, including hereditary breast cancers caused by the BRCA1 and BRCA2 genes. Yet, many diseases known to have a genetic component (because they tend to run in families or siblings show a high concordance) do not follow a simple Mendelian pattern of inheritance and cannot be dissected in this way. Instead, researchers tried an “association” approach, starting with a large number of unrelated individuals, to find gene variants, or alleles, that are more common in affected than in unaffected controls. For such a strategy to work, the diseases must be influenced by variants that are quite common in the populations.
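To make the "association" logic concrete, here is a minimal sketch of the basic test behind such studies (my own illustration with invented allele counts, not data from any of the studies discussed): count the two alleles of one SNP in cases and controls and ask whether the frequency difference is larger than chance would allow.

```python
# Minimal case-control association test at a single SNP (illustrative sketch;
# the allele counts below are invented, not real data).
from scipy.stats import chi2_contingency

#              risk allele  other allele
cases    =    [620,         380]   # allele counts in 500 affected people (2 alleles each)
controls =    [550,         450]   # allele counts in 500 unaffected controls

chi2, p_value, dof, expected = chi2_contingency([cases, controls])

print(f"risk-allele frequency: cases {cases[0] / sum(cases):.2f} "
      f"vs controls {controls[0] / sum(controls):.2f}")
print(f"chi-square = {chi2:.2f}, p = {p_value:.4g}")
# A genome-wide study repeats this at hundreds of thousands of SNPs, which is why
# a very stringent significance threshold (commonly p < 5e-8) is applied.
```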

Some readers might recall the heated debates about the common disease, common variant (CD-CV) hypothesis. Using arguments based on population genetics (such as the rate of creation and purging of deleterious alleles, the genetic bottleneck in the human population and subsequent population expansion), the CD-CV hypothesis proposed that in common diseases with a genetic component, some predisposing alleles are relatively common and a combination of alleles or environmental effects was required before disease occurred, much like being dealt a bad hand from a common deck of cards. Under this hypothesis, disease-associated alleles might be found by using common gene variants, such as single nucleotide polymorphisms (SNPs), as a guide and comparing affected individuals with controls. Others cast doubt on this idea and suggested that common diseases are unlikely to be caused by common alleles and more likely to be caused by rarer ones; they too deployed arguments based on population genetics and suggested that association studies using common genetic variants might not be successful. As with all scientific debates, there seemed only one way out: collect the data and see. Well, 10 years and many millions of dollars later, we have a lot of data, but are we any the wiser? Do we understand the allelic spectrum of disease any better than we did 10 years ago?

There have now been over 700 genome-wide association studies (GWAS) published linking many variants to over a hundred diseases. Many of these results are robust in that they can be replicated in several populations, leaving little doubt that common variants can contribute to common diseases. The problem is that the effect of these variants on disease is often rather modest, so that people with the disease-predisposing alleles are only slightly more likely to get the disease than those without. Larger and larger studies reveal more disease genes, usually with smaller and smaller effect on overall disease risk. The “missing heritability” problem then arises because, even in aggregate, these loci typically fall somewhat short of explaining the entire genetic component of disease risk. So where are the genes accounting for major predisposition to disease? One possible explanation is that GWAS do not directly reveal the disease-causative DNA variant, but rather a common DNA variant (usually an SNP) that is close enough to be genetically linked to it (almost always inherited together) and common enough to be on the genotyping microarrays. This has spurred more effort (and more expense) to find rarer and rarer SNPs by sequencing more genomes and to make even larger arrays, in the hope that the new SNPs may be in even closer linkage with the causative allele. Alternatively, it's possible that the disease-predisposing variants are not SNPs at all, but other changes in the genome, such as a duplicated or deleted gene or region—a so-called copy number variant (CNV)—or a result of epigenetic marks in the chromatin; neither of these would show up using the current generation of microarrays that look just at SNPs.
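As a rough illustration of the "missing heritability" arithmetic (a hedged sketch under an additive model for a continuous trait, with invented allele frequencies and effect sizes; liability-scale calculations for disease are more involved): a biallelic variant with allele frequency p and standardized per-allele effect beta explains roughly 2p(1-p)beta^2 of the trait variance, and summing this over the genome-wide significant hits typically recovers only a small slice of the heritability estimated from twin or family studies.

```python
# Rough "missing heritability" arithmetic under an additive model (all numbers invented).
# One biallelic SNP with allele frequency p and standardized per-allele effect beta
# explains roughly 2 * p * (1 - p) * beta**2 of the variance of the trait.

significant_hits = [                  # (allele frequency, per-allele effect), invented
    (0.30, 0.05), (0.15, 0.08), (0.45, 0.04), (0.25, 0.06), (0.10, 0.09),
] * 20                                # pretend 100 loci of similarly modest effect

variance_explained = sum(2 * p * (1 - p) * beta ** 2 for p, beta in significant_hits)
heritability_estimate = 0.50          # a typical twin/family-study figure, also invented

print(f"variance explained by GWAS hits: {variance_explained:.3f}")
print(f"heritability estimate:           {heritability_estimate:.2f}")
print(f"'missing' fraction:              {1 - variance_explained / heritability_estimate:.0%}")
```

Whether the gap reflects rarer variants in tighter linkage with the causative alleles, CNVs, or epigenetic changes that current arrays miss is exactly the question raised above.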

A paper published recently in PLoS Biology from the lab of David Goldstein put the cat amongst the CD-CV pigeons by suggesting that rather than common diseases being caused by common alleles, maybe rare alleles each with a large effect on disease might be creating “synthetic associations” in the GWAS signal by occurring, by chance, more often with one common allele than another. The paper used statistical reasoning to suggest that such synthetic associations are possible—but are they likely? Given how much time and money have been invested in surveying SNPs and attempting to match them up to diseases, the relative importance of such synthetic associations would have important implications for the direction of future research. The paper got a lot of publicity, even making the New York Times.

Now, some might say that no one likes the implication that they have been barking up the wrong scientific tree, still less perhaps that such a critique garnered a lot of publicity. But the issue is best settled by discussion—and data—which is why in this issue of PLoS Biology we publish two critiques of the original article together with a response from the original authors. The critiques argue that although rare variants could in theory create synthetic associations, this is not a likely explanation for the missing heritability. Perhaps with further advances in ever cheaper sequencing technologies and the ability to sequence whole genomes from affected individuals we will, before the next 10 years are up, finally have a better understanding of the missing pieces of the genetic causes of common disease.

Saturday, 19 January 2013

Anonymity Under Threat

Scientists uncover the identities of anonymous DNA donors using freely available web searches.

By Ruth Williams from The Scientist

A person donating their DNA sequence anonymously for research purposes may in fact be identified by a few simple web searches, according to a paper published today (January 17) in Science. But rather than trying to protect anonymity, some scientists believe efforts should instead be focused on educating DNA donors and on legislating against the misuse of sequence data.

“The paper is a nice example of how simple it is to re-identify de-identified samples and that the reliance on de-identification as the mechanism of ensuring privacy and avoiding misuse is one that is not viable,” said Nita Farahany, a professor of law and research at Duke University in Durham, North Carolina, who was not involved in the study.

Participants in public sequencing projects are told that their anonymity is not 100 percent guaranteed, but the risk of a person’s identity being discovered was perceived to be minuscule, explained Yaniv Erlich, a computational geneticist at the Whitehead Institute for Biomedical Research in Cambridge, Massachusetts, who led the study. However, a 2005 Washington Post article about a teenage boy who tracked down his biological sperm-donor father via online genealogy searches suggested the risk may be significant. According to the article, the boy had submitted a sample of his own DNA to a genealogy service that used repeat sequences from his Y chromosome to search its sequence databases for related males. Although the search did not uncover his father directly, it did find weak matches to two men who, importantly, shared a surname. Along with his father’s place and date of birth—information released to the mother—the likely surname enabled the boy to find and contact his father.

“We heard about this story and we thought, wow, this could be a threat for [the privacy of] personal genomes,” said Erlich.

To see how easy it might be to discover the identity of DNA donors, his team built software for retrieving Y-chromosome repeat information from whole genome sequences. With those repeat sequences, they could perform genealogy searches. “We thought, cool, let’s try it on the genome of Craig Venter,” said Erlich. “And it worked!”
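Conceptually, the core of such a tool is just counting how many times a short tandem repeat motif occurs at known Y-chromosome marker positions. The sketch below is a hypothetical illustration of that counting step using a regular expression; the marker, motif run and flanking sequence are invented, and this is not the authors' actual software.

```python
import re

# Hypothetical illustration of Y-STR repeat counting (not the authors' software).
# A real pipeline would first pull out reads or contigs aligned to known Y-STR loci;
# here we simply count the longest uninterrupted run of a motif in a made-up sequence.

def str_repeat_count(sequence: str, motif: str) -> int:
    """Return the length, in repeat units, of the longest run of `motif`."""
    runs = re.findall(f"(?:{motif})+", sequence)
    return max((len(run) // len(motif) for run in runs), default=0)

# Invented example: a fragment around a DYS19-like marker carrying 14 TAGA repeats.
fragment = "CCTGGG" + "TAGA" * 14 + "ACGTTC"
print(str_repeat_count(fragment, "TAGA"))  # -> 14

# A profile of such counts across a dozen or more markers is the kind of signature
# that genealogy databases such as Ysearch match against to suggest likely surnames.
```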

They searched the available genealogical sequence database at Ysearch.org and, sure enough, the strongest match by far was to someone named Venter from Lincolnshire in England. The surname, together with Craig Venter’s known age and state of residence—two pieces of information commonly accompanying anonymous genome sequences—were then used to search the online public record, USsearch.com. The search came up with just two possible people, and one was Craig Venter.

Taking the experiment further, Erlich and his colleagues used their software to retrieve Y chromosome information from the anonymous DNA sequences of male participants in a public sequencing project and showed that, using the same methods, they could accurately determine the identities of multiple individuals. They could even identify anonymous women donors related to the males, by virtue of family tree data accompanying the genome sequences and the ability to search online public records. The important point, said Erlich, is that “everything was publicly available. We didn’t break into any database. We didn’t need any special passwords.”

Although the authors find that the probability of discovering someone’s identity is still low, the study raises the question of whether more should be done to protect donors’ anonymity. But George Church, professor of genetics at Harvard Medical School, who was not involved in the study, thinks there is little point. “You can keep trying to adjust the protocols” - information about participants’ ages might be kept private, for example - “but that’s kind of putting a bandage on it.... It’s only going to get easier to re-identify [anonymous sequences], not harder,” he said. Although the Genetic Information Nondiscrimination Act in the United States prohibits employers and health insurance companies from discriminating on the basis of genetic information, “there is still a fear of the unknown,” said Brad Malin, a professor of biomedical informatics and computer science at Vanderbilt University in Nashville, Tennessee, who is worried that the study will frighten members of the public away from participating in genome sequencing projects. “It is important to highlight these problems, but at the same time, when you highlight them it is very difficult to temper the result,” he said.

Farahany agreed. “What we need to do is better educate people about the facts,” she said. Furthermore, she added, efforts might be better spent on regulating the use of sequence data, rather than ensuring anonymity. “That’s where we should focus our legal and ethical analyses,” she said - “not on trying to prevent the flow of information, but on trying to prevent the misuse of information.”

M. Gymrek et al., “Identifying Personal Genomes by Surname Inference,” Science, 339:321-324, 2013.

Saturday, 19 November 2011

The Human Genome Project, Then and Now

An early advocate of the sequencing of the human genome reflects on his own predictions from 1986.

By Walter F. Bodmer

In The Scientist’s first issue, Walter Bodmer, then Research Director at the Imperial Cancer Research Fund Laboratories in London, and later the second president of the Human Genome Organisation, wrote an opinion about the potential of a Human Genome Project (HGP). Now, more than a decade after the first draft genome was published, he reflects on the accuracy of those 1986 predictions.

In 1986 Bodmer predicted: the human genome would allow the characterization of “…10,000 or so basic genetic functions…”
In 2011 Bodmer says: “The ’10,000 or so basic genetic functions’ were not to be equated to genes, but to clusters of genes with related functions, and the estimate was not far off the mark. Now, however, we know that multiple splice products and considerable numbers of non-protein-coding, yet functional, sequences greatly extend the potential complexity of the human genome beyond the bare count of some 20,000–25,000 genes.”

1986: “Given a knowledge of the complete human gene sequence, there is no limit to the possibilities for analyzing and understanding…essentially all the major human chronic diseases…”
2011: “Now, with next-generation sequencing, one can even identify a mutant gene in a single appropriate family.”

1986: “The project will provide information of enormous interest for unraveling of the evolutionary relationships between gene products within and between species, and will reveal the control language for complex patterns of differential gene expression during development and differentiation.”
2011: “This has been achieved because, as expected, the HGP generated a huge amount of information on other genomes. However, only now is the challenge of the genetics of normal variation, including, for example, in facial features which are clearly almost entirely genetically determined, being met. Next, perhaps, will come the objective genetic analysis of human behavior.”

1986: “The major challenge is to coordinate activities of scientists working in this field worldwide.”
2011: “Collaboration has proved to be fundamental to the success of the Human Genome Project, which set the stage for global cooperation and, most importantly, open exchange and availability of new DNA sequences, and, more generally, large data bases of information.”

1986: “The project will include…development of approaches for handling large genetic databases.”
2011: “As predicted, there have been major developments in the ability to handle large databases. Accompanying this have been new approaches to data analysis using, for example, Markov Chain Monte Carlo simulations that are hugely computer intensive.

The most surprising, and certainly not predicted development, has been the extraordinary rate of advance first in the techniques for large scale automated genotyping, then in whole genome mRNA analysis and finally in DNA sequencing where the rate of reduction in cost and increase in speed of DNA sequencing has exceeded all expectations and has even exceeded the rate of developments in computing. This has, for example, made population based whole genome sequencing more or less a reality.

Perhaps one of the greatest future technological challenges will be to apply these techniques to single cells and to achieve a comparable level of sophistication of cellular analysis, to that we now have for working with DNA, RNA, and proteins.”

1986: “We should call it Project 2000.”
2011: “This prediction also came to pass, with a little political license. I am referring to the fact that Bill Clinton and Tony Blair announced the completion of the project on 26th June 2000, while the first proper publication of a (very) rough draft was not till February 2001 and it took several more years until a fully reliable complete sequence became available. The public announcement was no doubt politically motivated with, I am sure, support from the scientists, to promote what was being done and have something to say at the beginning of the new millennium. An intriguing interaction between science and politics that, in contrast to the Lysenko episode in the Soviet Union, for example, was quite harmless and may even have been of benefit.”

Sunday, 12 December 2010

The Great DNA Data Deficit: Are Genes for Disease a Mirage?

Jonathan Latham and Allison Wilson

Just before his appointment as head of the US National Institutes of Health (NIH), Francis Collins, the most prominent medical geneticist of our time, had his own genome scanned for disease susceptibility genes. He had decided, so he said, that the technology of personalised genomics was finally mature enough to yield meaningful results. Indeed, the outcome of his scan inspired The Language of Life, his recent book which urges every individual to do the same and secure their place on the personalised genomics bandwagon.

So, what knowledge did Collins’s scan produce? His results can be summarised very briefly. For North American males, the probability of developing type 2 diabetes is 23%. Collins’s own risk was estimated at 29%, and he highlighted this as the outstanding finding. For all other common diseases, however, including stroke, cancer, heart disease, and dementia, Collins’s likelihood of contracting them was average.
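To put those two numbers side by side (my own back-of-the-envelope arithmetic, not a calculation from the article), the difference works out as follows.

```python
# Back-of-the-envelope comparison of Collins's scan result with the baseline risk
# (illustrative arithmetic only, using the two figures quoted in the text).
baseline_risk = 0.23   # type 2 diabetes risk quoted for North American males
collins_risk = 0.29    # risk estimated from Collins's own genome scan

relative_risk = collins_risk / baseline_risk
absolute_increase = collins_risk - baseline_risk
print(f"relative risk ~ {relative_risk:.2f}, absolute increase ~ {absolute_increase:.0%}")
# ~1.26x relative risk, or about 6 percentage points in absolute terms.
```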

Predicting disease probability to within a percentage point might seem like a major scientific achievement. From the perspective of a professional geneticist, however, there is an obvious problem with these results. The hoped-for outcome is to detect genes that cause personal risk to deviate from the average. Otherwise, a genetic scan or even a whole genome sequence is showing nothing that wasn’t already known. The real story, therefore, of Collins’s personal genome scan is not its success, but rather its failure to reveal meaningful information about his long-term medical prospects. Moreover, Collins’s genome is unlikely to be an aberration. Contrary to expectations, the latest genetic research indicates that almost everyone’s genome will be similarly unrevealing.

We must assume that, as a geneticist as well as head of NIH, Francis Collins is more aware of this than anyone, but if so, he wrote The Language of Life not out of raw enthusiasm but because the genetics revolution (and not just personalised genomics) is in big trouble. He knows it is going to need all the boosters it can get.

What has changed scientifically in the last three years is the accumulating inability of a new whole-genome scanning technique (genome-wide association studies, or GWAS) to find important genes for disease in human populations. In study after study, applying GWAS to every common (non-infectious) physical disease and mental disorder, the results have been remarkably consistent: only genes with very minor effects have been uncovered. In other words, the genetic variation confidently expected by medical geneticists to explain common diseases cannot be found.

Read more

Sunday, 4 July 2010

What genes can't do

My second post comes from an article by Richard Lewontin, with his broad humanistic perspective on science. It's about the dream of the human genome and its promises. He wrote:

"Daniel Koshland, the editor of Science, when asked why the Human Genome Project funds should not be given to the homeless, answered, "What these people don't realize is that the homeless are impaired... Indeed, no group will benefit more from the application of human genetics""

Unfortunately, this deterministic ideology is the basis of most genomics research done nowadays. This ontological claim of the dominance of DNA over all aspects of life has key social and political consequences.

Monday, 11 October 2010

Genetics and human nature

It's sometimes said that the genes determine the limits up to which, but not beyond which, a person's development may advance. This only confuses the issue. There is no way to predict all the phenotypes that a given genotype might yield in every one of the infinity of possible environments. Environments are infinitely diversified, and in the future there will exist environments that do not exist now. (...) Heredity cannot be called the "dice of destiny". Variations in body build, in physiology, and in mental traits are in part genetically conditioned, but this does not make education and social improvements any less well founded. What genetic conditioning does mean is that there is no single human nature, only human natures with different requirements for optimal growth and self-realization. The evidence of genetic conditioning of human traits, especially mental traits, must be examined with the greatest care.

Theodosius Dobzhansky, Mankind Evolving (1962)