Tuesday, 24 May 2011
Matthew C. Good et al.
From Science
Mammalian cells contain an estimated 1 billion individual protein molecules, with as many as 10% of these involved in signal transduction. Given this enormous number of molecules, it seems remarkable that cells can accurately process the vast array of signaling information they constantly receive. How can signaling proteins find their correct partners - and avoid their incorrect partners - among so many other proteins?
A principle that has emerged over the past two decades is that cells achieve specificity in their molecular signaling networks by organizing discrete subsets of proteins in space and time. For example, functionally interacting signaling components can be sequestered into specific subcellular compartments (e.g., organelles) or at the plasma membrane. Another solution is to assemble functionally interacting proteins into specific complexes. More than 15 years ago, the first scaffold proteins were discovered—proteins that coordinate the physical assembly of components of a signaling pathway or network. These proteins have captured the attention of the signaling field because they appear to provide a simple and elegant solution for determining the specificity of information flow in intracellular networks.
Scaffold Proteins: Versatile Tools to Assemble Diverse Pathways
Scaffolds are extremely diverse proteins, many of which are likely to have evolved independently. Nonetheless, they are conceptually related, in that they are usually composed of multiple modular interaction domains or motifs. Their exact domain composition and order, however, can vary widely depending on the pathways that they organize. In some cases, homologous individual interaction motifs can be found in scaffolds associated with particular signaling proteins. For example, the AKAPs (A-kinase anchoring proteins), which link protein kinase A (PKA) to diverse signaling processes, all share a common short peptide motif that binds to the regulatory subunit of PKA. However, the other domains in individual AKAPs are highly variable, depending on what inputs and outputs the scaffold protein coordinates with PKA. Thus, scaffold proteins are flexible platforms assembled through mixing and matching of interaction domains.
Scaffold proteins function in a diverse array of biological processes. Simple mechanisms (such as tethering) are layered with more sophisticated mechanisms (such as allosteric control) so that scaffolds can precisely control the specificity and dynamics of information transfer. Scaffold proteins can also control the wiring of more complex network configurations—they can integrate feedback loops and regulatory controls to generate precisely controlled signaling behaviors. The versatility of scaffold proteins comes from their modularity, which allows recombination of protein interaction domains to generate new signaling pathways. Cells use scaffolds to diversify signaling behaviors and to evolve new responses. Pathogens can deploy scaffold proteins to their own advantage: their virulence depends on rewiring host signaling pathways to shut down or evade host defenses. In the lab, scaffolds are being used to build new, predictable signaling or metabolic networks to program useful cellular behaviors.
Monday, 16 May 2011
Multitasking Drugs
Paula A. Kiberstis
The escalating cost of developing new drugs has reinvigorated interest in “drug repositioning,” the idea that a drug with a good track record for clinical safety and efficacy in treating one disease might have broader clinical applications, some of which would not easily be predicted from the drug's mechanism of action. This concept is illustrated by two recent studies that propose that drugs developed for cardiovascular disease might offer beneficial effects in the setting of prostate cancer.
Farwell et al. suggest that statins (cholesterol-lowering drugs) merit serious consideration as a possible preventive strategy for prostate cancer. Building on earlier work on this topic, they found in a study of medical files of over 55,000 men that those who had been prescribed statins were 31% less likely to be diagnosed with prostate cancer than those who had been prescribed another type of medication (antihypertensives). In independent work, Platz et al. screened for agents that inhibit the growth of prostate cancer cells and found that one of the most effective was digoxin, a drug used to treat heart failure and arrhythmia. A complementary epidemiological analysis of about 48,000 men revealed that digoxin use was associated with a 25% lower risk of prostate cancer, leading the authors to suggest that this drug be further studied as a possible therapeutic for the disease.
J. Natl. Cancer Inst. 103, 1 (2011); Cancer Discovery 1, OF66 (2011).
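As a back-of-the-envelope illustration of what such figures mean, the following Python sketch uses purely hypothetical counts (not data from either study) to show how a relative risk of about 0.69 corresponds to being 31% less likely to be diagnosed:

    # Purely hypothetical counts, for illustration only; not data from either study.
    statin_cases, statin_total = 69, 10_000       # diagnoses among statin users
    control_cases, control_total = 100, 10_000    # diagnoses among antihypertensive users

    # Relative risk is the ratio of the two diagnosis rates.
    rr = (statin_cases / statin_total) / (control_cases / control_total)
    print(f"relative risk = {rr:.2f}, i.e. {100 * (1 - rr):.0f}% lower risk")
    # prints: relative risk = 0.69, i.e. 31% lower risk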
Saturday, 14 May 2011
Complex networks: Degrees of control
Magnus Egerstedt
Networks can be found all around us. Examples include social networks (both online and offline), mobile sensor networks and gene regulatory networks. Such constructs can be represented by nodes and by edges (connections) between the nodes. The nodes are individual decision makers, for instance people on the social-networking website Facebook or DNA segments in a cell. The edges are the means by which information flows and is shared between nodes. But how hard is it to control the behaviour of such complex networks?
The flow of information in a network is what enables the nodes to make decisions or to update internal states or beliefs — for example, an individual's political affiliation or the proteins being expressed in a cell. The result is a dynamic network, in which the nodes' states evolve over time. The overall behaviour of such a dynamic network depends on several factors: how the nodes make their decisions and update their states; what information is shared between the edges; and what the network itself looks like — that is, which nodes are connected by edges.
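To make the idea of a dynamic network concrete, here is a minimal Python sketch (an illustration, not a model from the article) in which each node repeatedly blends its own state with the state of the node it listens to:

    import numpy as np

    # Illustrative three-node directed network: entry A[i][j] = 1 means
    # node i receives information from node j.
    A = np.array([[0.0, 1.0, 0.0],   # node 0 listens to node 1
                  [0.0, 0.0, 1.0],   # node 1 listens to node 2
                  [1.0, 0.0, 0.0]])  # node 2 listens to node 0

    x = np.array([1.0, 0.0, 0.0])    # initial node states (e.g. opinions)
    for _ in range(20):
        # Each node blends its own state with its neighbour's state.
        x = 0.5 * x + 0.5 * (A @ x)
    print(x)  # the states converge towards a shared value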
Imagine that you want to start a trend by influencing certain individuals in a social network, or that you want to propagate a drug through a biological system by injecting the drug at particular locations. Two obvious questions are: which nodes should you pick, and how effective are these nodes when it comes to achieving the desired overall behaviour? If the only important factor is the overall spread of information, these questions are related to the question of finding and characterizing effective decision-makers. However, the nodes' dynamics (how information is used for updating the internal states) and the information flow (what information is actually shared) must also be taken into account.
Central to the question of how information, injected at certain key locations, can be used to steer the overall system towards some desired performance is the notion of controllability — a measure of what states can be achieved from a given set of initial states. Different dynamical systems have different levels of controllability. For example, a car without a steering wheel cannot reach the same set of states as a car with one, and, as a consequence, is less controllable.
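For linear dynamics, controllability has a classical test, the Kalman rank condition: a system dx/dt = Ax + Bu is controllable exactly when the matrix [B, AB, ..., A^(n-1)B] has full rank n. Here is a short Python sketch of that test (the example network is illustrative, not taken from the article):

    import numpy as np

    def is_controllable(A, B):
        # Kalman rank test: stack [B, AB, A^2 B, ...] and check for full rank.
        n = A.shape[0]
        blocks = [B]
        for _ in range(n - 1):
            blocks.append(A @ blocks[-1])
        return np.linalg.matrix_rank(np.hstack(blocks)) == n

    # A chain 0 -> 1 -> 2 driven by an input at node 0 only: controllable.
    A = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
    B = np.array([[1.0], [0.0], [0.0]])
    print(is_controllable(A, B))      # True

    # Drive the same chain only at the far end (node 2) and control is lost,
    # because nothing flows upstream from a sink node.
    B_end = np.array([[0.0], [0.0], [1.0]])
    print(is_controllable(A, B_end))  # False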
Researchers found that, for several types of network, controllability is connected to a network's underlying structure. They identified the driver nodes — those into which control inputs are injected — that can direct the network to a given behaviour. The surprising result is that driver nodes tend to avoid the network hubs. In other words, centrally located nodes are not necessarily the best ones for influencing a network's performance. So for social networks, for example, the most influential members may not be those with the most friends.
The result of this type of analysis is that it is possible to determine how many driver nodes are needed for complete control over a network. The authors carry out this calculation for several real networks, including gene regulatory networks that control cellular processes, large-scale data networks such as the World Wide Web, and social networks. We have a certain intuition about how hard it might be to control such networks. For instance, one would expect cellular processes to be designed to make them amenable to control so that they can respond swiftly to external stimuli, whereas one would expect social networks to be more likely to resist being controlled by a small number of driver nodes.
It turns out that this intuition is entirely wrong. Social networks are much easier to control than biological regulatory networks, in the sense that fewer driver nodes are needed to fully control them — that is, to take the networks from a given configuration to any desired configuration. The authors find that, to fully control a gene regulatory network, roughly 80% of the nodes must be driver nodes. By contrast, for some social networks only 20% of the nodes need to be driver nodes. What's more, the authors show that engineered networks such as power grids and electronic circuits are overall much easier to control than social networks and those involving gene regulation. This is due to both the increased density of the interconnections (edges) and the homogeneous nature of the network structure.
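The driver-node counts in this line of work come from a maximum-matching argument: the minimum number of driver nodes equals the number of nodes left unmatched by a maximum matching of the directed network, with a floor of one. Below is a minimal sketch of that computation using the networkx library (the helper function and toy graph are illustrative, not the authors' code):

    import networkx as nx
    from networkx.algorithms import bipartite

    def minimum_driver_nodes(digraph):
        # Split each node v into an "out" copy and an "in" copy; each directed
        # edge u -> v becomes an undirected edge (out_u, in_v).
        B = nx.Graph()
        out_copies = [("out", v) for v in digraph.nodes]
        B.add_nodes_from(out_copies, bipartite=0)
        B.add_nodes_from((("in", v) for v in digraph.nodes), bipartite=1)
        B.add_edges_from((("out", u), ("in", v)) for u, v in digraph.edges)

        matching = bipartite.hopcroft_karp_matching(B, top_nodes=out_copies)
        n_matched = len(matching) // 2  # the dict records each matched pair twice

        # Nodes whose "in" copy is unmatched must be driven directly; a
        # perfectly matched network still needs one driver node (any node).
        drivers = [v for v in digraph.nodes if ("in", v) not in matching]
        return max(digraph.number_of_nodes() - n_matched, 1), drivers

    # A directed chain 0 -> 1 -> 2 needs a single driver node, node 0.
    chain = nx.DiGraph([(0, 1), (1, 2)])
    print(minimum_driver_nodes(chain))  # (1, [0])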
These startling findings significantly further our understanding of the fundamental properties of complex networks. One implication of the study is that both social networks and naturally occurring networks, such as those involving gene regulation, are surprisingly hard to control. To a certain extent this is reassuring, because it means that such networks are fairly immune to hostile takeovers: a large fraction of the network's nodes must be directly controlled for the whole of it to change. By contrast, engineered networks are generally much easier to control, which may or may not be a good thing, depending on who is trying to control the network.
Thursday, 5 May 2011
If Bacteria Can Do It…
Learning community skills from microbes
By H. Steven Wiley
Numerically and by biomass, bacteria are the most successful organisms on Earth. Much of this success is due to their small size and relative simplicity, which allows for fast reproduction and correspondingly rapid evolution. But the price of small size and rapid growth is having a small genome, which constrains the diversity of metabolic functions that a single microbe can have. Thus, bacteria tend to be specialized for using just a few substrates. So how can simple bacteria thrive in a complex environment? By cooperating—a cooperation driven by need.
Bacteria rarely live in a given ecological niche by themselves. Instead, they exist in communities in which one bacterial species generates as waste the substrates another species needs to survive. Their waste products are used, in turn, by other bacterial species in a complex food chain. Survival requires balancing the needs of the individual with the well-being of the group, both within and across species. How this balancing act is orchestrated can be fascinating to explore as the relative roles of cooperation, opportunism, parasitism and competition change with alterations in available resources.
The dynamics of microbial behavior are not just a great demonstration of how the laws of natural selection work and how they depend on the nature of both selective pressures and environmental constraints. Microbial communities also demonstrate important nongenetic principles of cooperation. And herein lie lessons that scientists can emulate.
To be successful, scientists must be able to compete not only for funding, but for important research topics that will give them visibility and attract good students. In the earlier days of biology, questions were more general, making it easier to keep up with broad fields and to exploit novel research findings as they arose. As the nature of our work has become more complex and the amount of biological information has exploded, we have necessarily become more specialized. There is only so much information each of us can handle.
With specialization has come an increasing dependence on other specialized biologists to provide us with needed data and to support our submitted papers and grants. At the same time, resources have become scarcer, and we find ourselves competing with the same scientists on whom we are becoming dependent. Thus, it is necessary to find a balance between cooperation and competition in order to survive, and perhaps even to thrive.
The composition of microbial communities is driven by both the interaction of different species and external environmental factors that determine resource availability. Scientists want to learn the rules governing these complex relationships so they can reengineer bacterial communities for the production of useful substances, or for bioremediation. Perhaps as we learn the optimal strategies that microbial communities use to work together effectively, we will gain insights into how we can better work together as a community of scientists.
The Scientist
Deterministic genes
The belief that genes are somehow super-deterministic, in comparison with environmental causes, is a myth of extraordinary tenacity, and it can give rise to real emotional distress. I was only dimly aware of this until it was movingly brought home to me in a question session at a meeting of the AAAS. A young woman asked the lecturer whether there was any evidence for genetic sex differences in human psychology. The woman seemed to set great store by the answer and was almost in tears. Something or somebody had misled her into thinking that genetic determination is for keeps; she seriously believed that a "yes" answer to her question would, if correct, condemn her as a female individual to a life of feminine pursuits, chained to the nursery and the kitchen sink. But if [unlike most of us?] she is a determinist in that strong Calvinistic sense, she should be equally upset whether the causal factors concerned are genetic or "environmental".
From "The extended phenotype" by Richard Dawkins
From "The extended phenotype" by Richard Dawkins