Tuesday, October 31, 2006

Women's Health Study

Last week's (Oct. 24th) Wall Street Journal carried an informative article entitled "Tying Diseases to DNA in Thousands of Women" (subscription needed). The story is about a collaborative project between a team of academic, government and industry researchers who are examining the genetic causes of breast cancer, heart disease and other disorders affecting women. Here is an excerpt from the article:

The novel collaboration, which involves Brigham and Women's Hospital in Boston, the National Institutes of Health and the biotechnology giant Amgen Inc., will analyze DNA collected more than a decade ago from 28,000 participants in the Women's Health Study, combing it for differences between those who have developed serious illness and those who have remained healthy. Researchers believe the results will eventually enable physicians to better predict a woman's risk for disease and to tailor more effective treatments.

....The Women's Health study is one of several federally funded research projects for which participants provided blood samples years ago that now constitute a potentially rich resource to match genetic variations with disease on a large scale, a new approach called a "whole-genome association study."

Because huge amounts of data are involved -- the new initiative will look at 317,000 different variations from each person's DNA -- the cost of scanning the genes significantly exceeds what the NIH and academic institutions can afford. That has prompted academic scientists to seek support from private industry.
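
In outline, a whole-genome association study compares how often each variant occurs in affected versus healthy participants, one variant at a time. The sketch below illustrates one standard per-variant statistic (a 2x2 chi-square on allele counts); the counts are invented purely for illustration and are not from the Women's Health Study.

```python
# Minimal sketch of the per-variant test in a whole-genome association
# study: compare allele counts between affected ("case") and healthy
# ("control") participants at a single variant. The counts below are
# invented for illustration; they are not from the Women's Health Study.

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    # Expected counts if case/control status were independent of allele.
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical counts at one variant:
#   cases:    620 copies of the risk allele, 380 of the other allele
#   controls: 540 copies of the risk allele, 460 of the other allele
stat = chi_square_2x2(620, 380, 540, 460)
print(round(stat, 2))  # a large value flags the variant for follow-up
```

Repeating such a test across all 317,000 variants is what drives the cost the article mentions, and it is also why stringent statistical thresholds are needed before any single association is believed.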

....Under the terms of the new project, called the Women's Health Genome Study, results describing associations between genes and diseases will be posted in a public database through the NIH and made available to any scientist interested in conducting further research. "The overwhelming primary aim is to get the vast amount of these data in the public domain as quickly as possible so they can immediately advance patient care," Dr. Ridker says.

Once in the public domain, the genetic associations can't be patented, but scientists could use them as a springboard to research how to develop diagnostic tests and treatments that could be patented. "The purpose of this is to make a contribution to the global knowledge base," says Joseph P. Miletich, Amgen's senior vice president, research and development. "Amgen isn't looking for any rights to anything."



Thursday, October 26, 2006

Human Brain Map etc.

The BBC has an excellent interactive website on the human brain here. You can explore the brain by both structure and function.

The brain map section is only one part of the BBC Human Mind and Body site, an excellent resource for learning more about ourselves. For example, on this page you can take a number of Surveys and Psychology Tests: you can explore your memory, personality, and face perception. And over in the body section you can play the organs game or the skeleton game. And, for the truly ambitious, there is the nervous system game! It's a great educational resource.


Tuesday, October 24, 2006

The Genetics of Health

Conventional wisdom suggests that the prospect of developing effective therapies for disease lies with advances in disease genetics. That is, the more we learn about the role certain genes play in the development of disease, the better positioned we will be to develop therapeutic interventions that prevent, or possibly even cure, the wide range of diseases that humans are susceptible to.

Well, the title of my blog suggests that we should always be willing to question conventional wisdom. And so it was with much interest that I read an article in this month's issue of Nature Genetics (subscription needed) that challenges this conventional wisdom. In "The Genetics of Health" Joseph Nadeau and Eric Topol argue that the current emphasis on the discovery of disease genes means that the genetics of health is often neglected. They argue that "occurrence of long-lived healthy individuals, despite presence of genetic and environmental risks, raises the possibility that naturally occurring modifier genes and protective alleles maintain health. These 'healthy' genes are a powerful alternative approach for discovering drugs to treat and perhaps prevent disease effectively and safely".

Here are a few excerpts from this interesting article (along with some important definitions):

Research programs in human genetics typically start with a collection of individuals affected with a particular disease. With identification of susceptibility genes, pathogenic mechanisms are studied to discover treatment modalities. Tests are then undertaken to determine whether these modalities restore health in an effective and safe manner. This logical but circuitous path has not yet been as successful as hoped. Development of new drugs is risky because candidates too often show limited effectiveness and adverse side effects. The key problem is designing exogenous agents that interact with endogenous molecules to restore health to dysfunctional biological systems.

....An alternative approach based on the genetics of health provides exciting opportunities to accelerate discovery of genetically based modalities to suppress disease. The tendency for health to persist despite the presence of susceptibility genes has several explanations, including modifier genes and protective alleles that confer genetic resistance to disease. Resistance genes provide insight into homeostasis that maintains health, whereas susceptibility genes provide insight into the pathogenic mechanisms that lead to dysfunction and disease. A remarkable but often neglected observation in many families and populations is the occurrence of elderly individuals who inherit disease genes but who nevertheless remain healthy. Elderly at-risk but unaffected individuals may be nature's signal for a solution to the problem of inherited disease. Although the strongest evidence involves single-gene disorders in which unaffected individuals are readily identified, several recent studies report evidence for modifiers in multigenic models of human disease. In this Commentary, we review the relevant characteristics of modifier genes and protective alleles, discuss the attributes that make them compelling candidates as therapeutic targets and propose initiatives to test the feasibility of the 'healthy gene' paradigm.

[Some useful definitions from the article]

Modifier genes. Modifiers are variants of one gene that modulate the phenotypic expression of another ('target') gene. Perhaps the most important generalization from studies of genetically engineered mice is that their phenotypic expression depends heavily on modifier genes in the genetic background. A classic example of a modifier effect involves a genetic variant on chromosome 7 in mice and 19q in humans that controls the association of meconium ileus with cystic fibrosis, which results from mutations in the CFTR gene (on 7q31 in humans, with a homolog on chromosome 6 in mice). The variable association of this serious complication with cystic fibrosis is therefore controlled by a genetic variant that is unlinked to the CFTR target gene.

Protective alleles. Three classes of variants can occur at a given gene: an allele might increase disease risk in carriers (a susceptibility allele), another might neither increase nor reduce risk (a neutral allele) and finally an allele might reduce risk (a protective allele). A classic example of these three allelic classes involves the APOE gene that modulates Alzheimer disease susceptibility. The APOE4 variant allele increases risk for Alzheimer disease and associated comorbidities, whereas the APOE2 variant is protective and APOE3 neutral.



Monday, October 23, 2006

Egoism and Community

Every fortnight during term the CSSJ runs lunchtime sessions where we focus on a member’s “work-in-progress”. Today was my turn and I suggested a recent paper I wrote entitled “Towards an Economic Theory of Community”.

The initial catalyst for the paper was this essay contest. The paper tries to bridge two distinct (though related) debates- one in normative ethics, the other in political theory.

The debate in normative ethics concerns prudential versus non-prudential (esp. Kantian) accounts of morality. For proponents of the prudential account, the authority of morality is rationally justified by appeal to self-interest. David Gauthier (1986), for example, argued that rational utility maximizers would accept moral constraints.

David Brink has recently advanced a novel prudential account of morality, one that bases morality on a particular account of self-interest- eudaimonia. Human flourishing, on this account of self-interest, involves exercising our practical reason (deliberative capacities). Brink draws upon a rich tradition of thinkers, including Plato, Aristotle and T. H. Green.

A few years ago I published a critique of metaphysical egoism, but I think the challenges I raised against Brink are not insurmountable. Indeed, I now think it is important to invest more thought in defending (rather than critiquing) the theory. Hence I have been moved to write this new paper.

The second debate the paper is meant to address is the long-standing debate in political theory (which dominated much of the literature in the 1980s and early 1990s) between liberals and communitarians. One of the distinctive features of a communitarian political theory is the conception of the self it embodies. This conception of the self is often referred to as a “social” or “embedded” conception and it can be contrasted with the “unencumbered” conception of the self endorsed by contemporary liberalism. A social conception of the self recognizes the fact that humans are social beings and as such they are embedded in a web of social networks (e.g. family, neighbourhood, national identity, etc.) that shape their constitutive ends. This interconnected web of social relations is ignored by atomistic social theories that envision persons as rational utility maximizers placed behind a veil of ignorance or in a prisoner’s dilemma.

In this paper I argue that the emphasis on our social nature is only one component of a defensible communitarian conception of the self. The social conception of the person must be supplemented and integrated with the *temporal* conception of the self. As human beings we are temporal beings with finite time and abilities (e.g. knowledge, empathy, etc.). Taking the temporal and finite nature of human beings seriously is important for determining what kinds of community are viable and what kinds of things we can do to foster and cultivate different communal aspirations.

To take the temporal and finite nature of human beings seriously, we must develop an economic theory of community- that is, one that explains the communal ties and affinities of a particular person by reference to the costs and benefits of communal membership. This is where I think metaphysical egoism can be very useful to communitarians.

By developing an economic theory of community that is based on the Greek eudaimonist tradition one can make sense of the different commitments temporal and finite individuals have to different kinds of communities at different stages of their lives. And this should prove invaluable in terms of considering the wide range of possible strategies available for fostering different kinds of community, whether it be within the family, local neighbourhoods, a country or globally. More specifically, the economic theory of community emphasizes the need for individuals to be psychologically connected and continuous with others. And there are a variety of different ways to foster these connections- ranging from having a spouse and children to reading novels and caring for a pet or one’s garden.

At some future stage I will probably post a few things related to the argument I advance in the paper. For example, how it inspires a moralized (though not overtly moralized) account of deliberative democracy. And how it invokes a plausible (and attractive) interpersonal discount rate that occupies the mean between selfishness and complete altruism. The latter is so because the scope and stringency of other-regarding inclinations are, according to metaphysical egoism, determined by the degree to which we are psychologically connected and continuous with others.
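
The idea of an interpersonal discount rate lying between selfishness and complete altruism can be given a toy formal gloss: weight each other person's welfare by one's degree of psychological connectedness to them, a number between 0 and 1. The functional form and the numbers below are my own invention, purely to illustrate the idea- nothing in the paper fixes these choices.

```python
# Toy model of an interpersonal discount rate: weight each other
# person's welfare by one's degree of psychological connectedness to
# them, a number in [0, 1]. All weights 0 is pure selfishness; all
# weights 1 is complete altruism. The functional form and the numbers
# below are my own invention, purely to illustrate the idea.

def weighted_concern(own_welfare, others):
    """others: list of (welfare, connectedness) pairs."""
    return own_welfare + sum(w * c for w, c in others)

# A spouse (high connectedness), a neighbour, and a distant stranger,
# each enjoying 10 units of welfare:
total = weighted_concern(10, [(10, 0.9), (10, 0.3), (10, 0.01)])
print(round(total, 2))  # falls between 10 (selfish) and 40 (fully altruistic)
```

On this gloss, the scope and stringency of other-regarding concern is not fixed in advance but varies with how psychologically connected and continuous the agent is with each person.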


Thursday, October 19, 2006

Clarendon Law Lecture

I just returned from the Clarendon Law lecture here at Oxford, given by Justice Stephen Breyer of the United States Supreme Court. This is the second time I have had the pleasure of hearing a public talk given by a sitting member of the US Supreme Court (the first being Justice Antonin Scalia). And, like the first experience, it was an amazing presentation and a real joy to listen to.

Justice Breyer’s talk (the first of two) addressed issues from his new book Active Liberty: Interpreting Our Democratic Constitution.

Below I will do my best to outline the general thrust of his presentation, though this brief outline is based on my memory and the notes I took during the 60-minute talk:

Justice Breyer began by noting that, just 15 minutes before his talk began, his third grandchild had been born. After some opening comical remarks- which immediately warmed the audience to him- he noted that his talk would address two things. Firstly, he would begin with some reflections on the US Constitution. Secondly, he would talk about his new book, which is a book about democracy (more specifically, participatory democracy).

First Section: What is the Constitution? What does it do? Justice Breyer succinctly summarized the various components of the Constitution. Basically, the Constitution creates the Federal government.

Justice Breyer emphasised the importance of James Madison’s insight that “This is a Charter granted by Liberty…”. Unlike Europe, where power resided with the King, the U.S. was to start afresh. And the people were to decide how to live together.

Justice Breyer also noted that most law in the United States comes from the (50) states. Of course the Constitution does trump state law. And each year the Supreme Court receives around 8000 requests to hear cases, and decides to hear roughly 80 or so cases. What are the criteria for deciding to hear a case? The main criterion is that the judges believe there is an issue that needs a uniform rule. (For example, because there is disagreement in lower courts).

Second Section: Justice Breyer then turned to the argument in his book. Why write the book? What is it about the Supreme Court that is so different from, for example, an appellate court? The Supreme Court is special because you begin to see the Constitution as a steady guide… you see different parts of it (more frequently). Is it possible to see the document as a whole? Justice Breyer claimed that once you have been on the Supreme Court for a while, you do.

So the central answer as to what the Constitution does, for Justice Breyer, is that it creates democratic instruments of government. It inspires a certain kind of democracy, one that protects rights, equality and the separation of powers (both horizontally and vertically), and it insists upon the rule of law.

One might then wonder: "What are judges doing?" Justice Breyer claimed that they are basically on border control. And life on the border can get difficult.

Then Justice Breyer turned to the notion of “Active Liberty”. This is the idea that the people are to participate in the system… to decide what kinds of rules they want to live by. And this reading of the Constitution (that its purpose is to facilitate the exercise of Active Liberty) can actually help Supreme Court judges decide a case… to arrive at a reasonable answer.

Justice Breyer then provided some examples. For instance, campaign finance reform. Does it violate the 1st Amendment? Not easy to answer (though some think it is easy to answer). Important question to ask: What is the 1st Amendment there for? Answer: for democratic government, to allow the people to participate. So facilitating dialogue is important. What happens when a handful of people (with huge contributions) take up all conversational areas? So recognising the importance of conversational areas is related to the democratic principle. And thus the real important questions, when it comes to campaign finance reform, are- when? How much? Under what circumstances? Etc. Such restrictions can further (not stop) the democratic conversation.

Justice Breyer also gave further examples dealing with affirmative action, technology, security, etc. He stressed that it is important that judges come in (if they must) at the end (not the beginning). They should let the process work itself out.

This account of Active Liberty provides, argued Justice Breyer, a general view of law- one that pays a lot of attention to purposes and consequences. Most judges, when faced with a difficult legal case, utilise 6 basic tools: (1) look at the text; (2) tradition: what have the words come to be in law?; (3) precedent; (4) history; (5) purpose (or the value underlying the text); and (6) consequences: what will happen if I decide this way versus that way. Justice Breyer stressed these last two (some judges emphasise the first 4 more).

Thus concludes Part 1 of Justice Breyer’s Clarendon Lecture. I am hoping to make it to the second part tomorrow evening if I can. It was a joy to listen to his judicial philosophy, and much of what he said resonated with my own thoughts about the role of the judiciary and democracy (as expressed here and here). I look forward to reading Breyer’s new book.


Gene Therapy for Parkinson's (Update)

Today's daily Google Alerts for Gene Therapy brings this encouraging story (from CBS News) about the recent success of gene therapy for Parkinson's Disease. The CBS story thus updates this CNN story about Nathan Klein, a man from NY state who was the first recipient of gene therapy for Parkinson's Disease. As that initial story reports, Klein had billions of copies of a gene inserted into his brain. A small hole was drilled into his skull, and a tiny tube "delivered a dose of a virus engineered to ferry copies of the gene into cells of the subthalamic nucleus. The gene will let the cells pump out GABA [a calming chemical]".

The updated CBS story describes Klein's current condition:

"Before the operation, I was a quivering mass of flesh," Klein, 58, tells WebMD. "With my medications, I am like 80 percent or 90 percent better. I am at a point right now where if you didn't know I had Parkinson's disease, you couldn't tell." Klein is delighted with the treatment — even though, as the first patient, he got a much lower dose than did the next 11 patients to be treated. Moreover, none of these 12 patients got the full treatment. As a safety precaution, only one side of their brains was treated. Yet this half-treatment seems to work as well as deep brain stimulation, the best new treatment for Parkinson's disease. And it's safe, reports therapy co-developer Matthew J. During, M.D., DSc, of New York-Presbyterian Hospital and Cornell Weill Medical Center.

During, co-founder of Neurologix Inc., the company that is developing this gene therapy, comments on Klein's progress and the state of the experimental therapy:

During says that Klein's movement problems got about 40 percent better after treatment. Not all patients have done that well. But those who got the highest dose tended to get the most improvement. One year after treatment, nine of the 12 patients showed an average improvement of 37 percent on a measure of Parkinson's disease severity. Five patients had between 40 percent and 65 percent improvement. "We are encouraged the results seem to be maintained over time," During says. "When we look at the other side of the brain, the side we did not treat, we always see worsening of disease. This is a progressive disease. If we hadn't done the treatment, we would expect these patients to be doing significantly worse." These very encouraging results don't prove anything, During notes. That will depend on a large clinical trial in which participants aren't aware of what treatments they receive and some participants receive no active treatment (placebo). A trial like this is in the planning stages.

What is Parkinson's Disease? The National Parkinson Foundation has a useful website here. Here are a few of the facts contained on that site:

What is Parkinson disease?
Parkinson disease is a brain disorder. It occurs when certain nerve cells (neurons) in a part of the brain called the substantia nigra die or become impaired. Normally, these cells produce a vital chemical known as dopamine. Dopamine allows smooth, coordinated function of the body's muscles and movement. When approximately 80% of the dopamine-producing cells are damaged, the symptoms of Parkinson disease appear.

Who gets Parkinson disease?
Parkinson disease affects both men and women in almost equal numbers. It shows no social, ethnic, economic or geographic boundaries. In the United States, it is estimated that 60,000 new cases are diagnosed each year, joining the 1.5 million Americans who currently have Parkinson disease. While the condition usually develops after the age of 65, 15% of those diagnosed are under 50.


Tuesday, October 17, 2006

Genetic Justice Presentation

On Friday I presented my paper "Genetic Justice Must Track Genetic Complexity" to the James Martin Advanced Research Seminar here at Oxford. I am grateful for all the useful feedback I received on the paper and presentation.

Oxford is an ideal place to study genetics and justice as it houses both the Program on Ethics and the New Biosciences and the Future of Humanity Institute, in addition to The Ethox Centre (not to mention the Centre for the Study of Social Justice). So there is an impressive number (and caliber) of researchers, from different disciplines, addressing a diverse range of social, legal and ethical issues related to the genetic revolution.

My paper addressed the issue of how we ought to modify the currency of distributive justice to include our genetic potentials for the natural primary goods. This is what I mean by developing a theory of "genetic justice". Of course genetic justice addresses only a subset of concerns that arise in light of the genetic revolution. More specifically, genetic justice addresses the issue of how we should regulate direct genetic interventions (e.g. somatic gene therapies). And I attempt to place the demands of genetic justice against the background of a more general account of societal fairness. In other words, we should not address the issue of genetic justice in an insular fashion, as if the duty to directly mitigate genetic disadvantage was the only demand of justice we need to consider.

My research on genetics and justice has had a profound impact on the way I think political philosophers should approach distributive justice more generally. Namely, they ought to function at the level of "non-ideal" rather than "ideal" theory. The reasons for this are manifold. Firstly, a fact-sensitive account of justice is more likely to lead us to viable prescriptions that we could implement or pursue in the "here and now" (rather than in some imagined, ideal scenario). Secondly, a fact-sensitive account of justice is less likely to rely on contentious arm-chair theorising that political philosophers cannot adequately defend (nor feel they must defend) when they can simply say "I'm talking about ideal theory...". So functioning at the level of non-ideal theory compels a political philosopher to engage in a self-conscious dialogue (and ideally a real dialogue as well!) with others (e.g. legislatures, courts, healthcare administrators, citizens, etc.) and this can enhance the value of a normative analysis of issues of justice. Functioning at the level of ideal theory is unlikely to have this result, as the political philosopher will see issues of feasibility (for example) as a distinct (secondary) problem, not one that has a fundamental bearing on the derivation of the principles of justice themselves. This further widens the gap between theory and practice.

Of course ideal theorists will retort that non-ideal theory also has its problems. I agree. So much more attention needs to be given to these issues. But when we weigh up what we have lost (as a result of the ideal paradigm) I think it is worth taking the risk of moving closer towards the non-ideal theory camp.

My concerns about the methodology of political philosophy are important for addressing the issue of genetics and justice. My central worry is that we will be primarily concerned with having a debate at the level of ideal theory. So, for example, we might assume safe and effective genetic interventions will just land in our laps like manna from the heavens. And then all we need to do is figure out what the just distribution of such interventions would be (e.g. genetic decent minimum, genetic equality, etc.). Or we might ignore the possibility that the duty to prevent harm to our offspring could potentially conflict with respect for procreative liberty- for example, if the mode of intervention was a prenatal genetic therapy. Or we might ignore the fact that the effort to directly mitigate genetic disadvantage will have an impact on the budgets available for redressing other forms of disadvantage (e.g. poverty, etc.). So if we want a normative theory to be of use to us in the "here and now", we must advance a theory of genetic justice that is suitably fact-sensitive. I have posted a few comments about this in an earlier post.

So to return to the paper I presented on Friday... I argued for two things. Firstly, that a necessary condition of a defensible account of genetic justice is that it must track genetic complexity. Genetic complexity encompasses phenomena such as polygenic traits, gene-gene interactions and complex environmental influences (Alper, 2002, p. 22). By tracking genetic complexity, the principles of genetic justice will (at least for the foreseeable future) be largely indeterminate. Such indeterminacies should not be regarded as a failure to utilise or properly execute the skills of analytic philosophy. Rather, such indeterminacy simply reflects the realities of the complex nature of both human genetics and the demands of justice in the real, non-ideal world. That is, a world that is characterised by both scarcity and pervasive disadvantage.

Secondly, I argued that pluralistic prioritarianism is a theoretical position well-suited for tracking genetic complexity. Prioritarians maintain that benefiting people matters more the worse off these people are (Parfit, 2000, p. 101). But a defensible version of prioritarian justice needs to be pluralistic in the following two ways. Firstly, it must recognise that diverse forms of disadvantage pervade our societies (e.g. genetic disease, poverty, crime, accidents, etc.) and that there is a plurality of ways of redressing disadvantage. Mitigating these diverse disadvantages will draw on the same inevitably limited budgets. Secondly, prioritarians must be pluralistic in that they seek to balance their prioritarian commitments with other values, such as utility and freedom. Consideration must be given to the severity and pervasiveness of different forms of disadvantage, the costs of mitigating these different disadvantages, and the likelihood that the benefits of mitigation will be realised.
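
Parfit's prioritarian claim can be given a simple formal gloss: compute social value by summing individual welfare through a strictly concave transform, so that a fixed benefit raises social value more when it goes to someone worse off. The sketch below uses a square-root transform and invented welfare numbers purely for illustration; nothing in Parfit's formulation or in my paper fixes these particular choices.

```python
import math

# Toy prioritarian aggregation: sum individual welfare through a
# strictly concave transform (here a square root, one conventional but
# arbitrary choice), so a fixed benefit counts for more when it goes
# to someone at a lower welfare level. Welfare numbers are invented.

def prioritarian_value(welfare_levels):
    """Social value as the sum of concavely transformed welfare."""
    return sum(math.sqrt(w) for w in welfare_levels)

# The same +10 benefit raises social value more when given to the
# worse-off person (welfare 16) than to the better-off one (welfare 64).
baseline = prioritarian_value([16, 64])
gain_to_worse_off = prioritarian_value([26, 64]) - baseline
gain_to_better_off = prioritarian_value([16, 74]) - baseline
print(gain_to_worse_off > gain_to_better_off)  # True
```

The concavity of the transform is doing all the work here: it is the formal counterpart of the claim that benefits matter more the worse off the recipient is, without requiring that the worst off always take absolute priority.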

So what principle of genetic justice can emerge from all of this as a viable and attractive distributive principle? I believe it is what I call the lax genetic difference principle, a principle I defended here. This principle states: "inequalities in the distribution of genes important to the natural primary goods are to be arranged so that they are to the greatest reasonable benefit of the least advantaged". A serious debate concerning what constitutes a reasonable (and an unreasonable) benefit to the genetically disadvantaged is the debate I hope we have (rather than an idealized one). Such a debate would illuminate (rather than ignore or bracket) a number of the difficult issues we face as we attempt to implement a fair distribution of the by-products of the genetic revolution. These issues range from determining how robust genomic intellectual property rights should be to determining the limits and scope of reproductive freedom. So by tackling the issue of genetics and justice one addresses complex concerns about how we ought to regulate biotechnology, but one also raises interesting methodological questions concerning how we ought to construct normative theories and what such theories are for.


Saturday, October 14, 2006

Genetic Alterations in Cancer

This week's issue of Science has a very interesting research article on genetic alterations in cancer. Here is the title and abstract.

"The Consensus Coding Sequences of Human Breast and Colorectal Cancers" by Tobias Sjöblom et al. Science 13 October 2006: 268-274.

Abstract: The elucidation of the human genome sequence has made it possible to identify genetic alterations in cancers in unprecedented detail. To begin a systematic analysis of such alterations, we determined the sequence of well-annotated human protein-coding genes in two common tumor types. Analysis of 13,023 genes in 11 breast and 11 colorectal cancers revealed that individual tumors accumulate an average of 90 mutant genes but that only a subset of these contribute to the neoplastic process. Using stringent criteria to delineate this subset, we identified 189 genes (average of 11 per tumor) that were mutated at significant frequency. The vast majority of these genes were not known to be genetically altered in tumors and are predicted to affect a wide range of cellular functions, including transcription, adhesion, and invasion. These data define the genetic landscape of two human cancer types, provide new targets for diagnostic and therapeutic intervention, and open fertile avenues for basic research in tumor biology.



Thursday, October 12, 2006

Voluntarism and Tax Breaks

Originally posted April 30th, 2006

The following post is an opinion article I wrote for Newsday (a NY newspaper) in May 2005. The commissioning editor invited me to write an opinion piece on an issue that was getting a lot of local attention in NY at the time- tax breaks for volunteer emergency personnel. In the aftermath of 9/11 New Yorkers, perhaps more than anyone else, appreciated the value of the brave contributions emergency personnel make to society. But in this article I tried to temper such sentiments with some reflections on the meaning of voluntarism and by placing the issue within the larger context of societal fairness.


Rewards, yes, but tax breaks, no
[published in Newsday, May 1st 2005]

By Colin Farrelly

Emergency personnel provide an essential public service, and many risk the ultimate sacrifice — their own safety. In the aftermath of 9/11, gratitude for the heroism and dedication of these workers has inspired support for tax relief for volunteer firefighters and ambulance workers around the United States.

Legislation is now being proposed in Rockland County, for example, that would allow such volunteer workers to shop tax-free. “Any kind of incentive you can give the volunteers is a good thing,” the bill’s sponsor has said.

For Long Island, in 2004 Gov. George Pataki signed into law legislation that increased the property exemption benefit on the homes of eligible Suffolk fire and ambulance volunteers. They are permitted a 10-percent exemption on the assessed value of a home, rather than the previous cap of $3,000. Yet, in February, the Harborfields Central School Board decided not to grant the tax exemption to volunteer firefighters in the fire departments of Greenlawn and Centerport. The board wrote that it appreciates the service of the volunteers, but that the exemption would result in $52,271 “that would have to be borne by others in the community.”

Given the different stances that localities have taken on this issue, it is worth considering the question: Are tax breaks for volunteer emergency workers appropriate and fair? A proponent of tax relief might argue: "These volunteers make enormous sacrifices (they literally risk their lives), and giving them a tax break is a way of expressing our public gratitude for their sacrifices." No one would argue that these volunteers aren’t making a substantial contribution. But are tax breaks an appropriate form of recognition?

We admire volunteer firefighters and emergency workers because their actions go over and above a citizen’s call of duty. Some kind of public recognition is no doubt required. But monetary compensation, in the form of tax breaks, is arguably inappropriate and potentially unfair. The satisfaction of contributing without pay is part of the reason this work is rewarding for many volunteers. To offer what is effectively a salary is to reduce the most admirable characteristic of humans (benevolence) to the impoverished measure of value that already consumes society: money. The more significant the tax break for such volunteer work, the less plausibly it can be described as "volunteer" work and the less prestige it will carry in the community.

Perhaps more important, the problem with granting such tax breaks is that such policies could potentially exacerbate existing injustices. The Harborfields school board clearly is concerned with this question: How will localities make up the loss in public revenue? This leads to more questions: Will extra tax-burdens be placed on citizens? If so, will this be done in a fair manner so the extra burden does not fall on those who already suffer economic hardships? If taxes will not be raised, then what will localities be willing to cut in order to offset the loss of public revenue?

It is one thing to say you support tax relief for volunteer emergency workers, but it is hypocritical to do so if you are not willing to subsidize such tax breaks out of your own pocket. To be fair across the board, tax breaks for volunteers must be made in conjunction with higher taxes on the more affluent, so that the rewards for volunteers do not also inflict unfair burdens on the vulnerable.

There are, of course, other possible arguments that are used to support tax breaks for volunteer emergency workers. Some make a pragmatic case that such tax breaks could help bolster lagging recruitment efforts. It would cost far more, this argument goes, if the locality is forced to replace the volunteer organizations with a paid force. The Greenlawn fire chief estimates, for example, that it would cost taxpayers $1.2 million to $1.4 million for one engine with five firefighters and one ambulance with two EMS workers, far more than the $52,271 tax break.

Faced with the choice between creating incentives to bolster volunteers or incurring the costs of a paid force, one might reasonably claim that tax breaks for volunteers are necessary. But that doesn’t necessarily make them fair. Why don’t we also give tax breaks to other dedicated volunteers who make admirable sacrifices that benefit our communities, such as literacy volunteers? Fairness requires us to treat like cases alike, so is it fair to give tax breaks only for certain kinds of volunteer contributions and not others?

Fairness also requires us to consider the more general moral question of why localities face such a dilemma in the first place. When tax breaks are expected for voluntary sacrifice in service of the community, and the assumption is that a monetary reward is the only sign of sincere gratitude for civic contributions, one has to wonder whether there is a larger, more pressing, problem that we have failed to diagnose and address.

Tuesday, October 10, 2006

Genes and Aging

The Sept. 2006 issue of Nature Genetics has an interesting article by Siegfried Hekimi entitled "How Genetic Analysis Tests Theories of Animal Aging" (subscription needed). The phenomenon of aging raises a number of interesting ethical and social issues.

Recall that in an earlier post I noted that I have been giving some thought to the proposal that we ought to wage a war on aging itself. As I read through Hekimi's paper these issues came to me again. As a prioritarian I am of two minds about the prospect of such a war being waged, given the facts of scarcity and pervasive disadvantage. Much of course depends on the proposed means of fighting such a war, how proportionate our attention to this cause is relative to other pressing moral demands, as well as the likely magnitude of the benefits of waging such a war. I think such concerns inform, for example, the Longevity Dividend Campaign, hence the reason I support it.

The more I think about the phenomenon of aging the more I realise that we need to critically assess our current attitudes towards aging and the effort to extend the human health span. Here are a few extracts from Hekimi's paper that got me thinking about these kinds of concerns:

Aging. Aging is the increase in the probability of dying with the passage of time. It is also the increased susceptibility for any of a number of diseases, regardless of whether they are ultimately responsible for the deaths of the individuals developing them. In addition, aged individuals are less resistant to injury, whether from physiological accidents (for example, surviving a heart attack) or environmental accidents (for example, bone fracture), and they are less resistant to infection. Aging is a phenotype (of which life span is one feature); consequently, the pattern of aging depends on the genotype and on the environment: different species, as well as different strains from a single species, such as the mouse, and different human populations, develop different physiological or anatomical alterations with age and die from different age-dependent diseases.

....Is what we call 'aging' a mechanism or concept? One can describe at least two views of the meaning of the term 'the aging process' (Fig. 2, inserted below). In one view, aging is a biochemical mechanism that does not induce disease in itself, but whose progression is an ever-increasing risk factor for both the onset and the severity of age-dependent diseases and leads to a lesser ability to resist, survive and recover from physiological disturbances. In the other view, the aging process is but the conceptualization of the collection of all age-dependent diseases. In a given species, such as humans, age-dependent diseases, whether immediately life-threatening or less severe at first, arise in a pattern that is relatively stereotyped within a population and is easily recognized as the consequence of increasing age, both by visible signs (graying, wrinkles, increased body fat, changes in posture) and by patterns of disease prevalence (such as atherosclerosis, cancer and, later in life, neurodegenerative diseases). Interestingly, the view that the molecular mechanism that underlies aging is the accumulation of unrepaired damage seems to be consistent with both views of aging. Indeed, in the first view, the effect of the accumulated damage is diffuse: it creates a decrease in the strength of homeostasis that leads to an increase in the risk of developing disease, but it does not initiate disease. In the other view, the damage directly produces disease, with different types of damage producing different diseases.

[Nature Genetics - 38, 985 - 991 (2006)]
Siegfried Hekimi

....Although the existence of mutants with increased life span could have been predicted based on evolutionary theory, the finding that there are mutations that can increase life span as massively as some do could not have been predicted. For example, mutations in the daf-2 or isp-1 genes of Caenorhabditis elegans can increase life span twofold, and this can be further doubled in combination with other single-gene mutations such as in daf-2 clk-1 double mutants [4]. As already discussed, it is invariably observed that toward the end of their life span, animals develop a variety of age-dependent diseases. In a given species, strain or population, a single disease can be a major cause of death, but avoiding that particular disease does not ensure indefinite survival, because some other lethal disease of later onset will have the opportunity to develop. This is illustrated by the finding that mice that are partially resistant to cancer do not really live longer and by the statistical prediction that if we could cure cancer or atherosclerosis in humans, life span would be lengthened by only a few years [5,6]. What to think, then, of a mutation in a single gene or pair of genes that can increase animal life span several-fold? Considering worm mutants that live several times longer than the wild type, one realizes that to obtain such an effect, every single degenerative process that normally kills wild-type worms must be dramatically delayed, and all the mechanisms of somatic maintenance and structural elements that have evolved to suffice for only one lifetime become capable of functioning and supporting life during several lifetimes. This observation implies that there are only a handful of processes that together are causal to all manifestations of aging, which is why mutations that can affect one or a few of these processes can have such large effects.
Although the magnitude of the life-span effects that have been obtained by genetic manipulation in mammals has not been as impressive as in invertebrates, and caution is still required in the interpretation of some of the currently available data, the increases are sufficiently large (up to 50%) that it is arguable that this could not have been achieved if only a single disease process had been affected by these genetic changes.


Monday, October 09, 2006

What is Political Theory?

Originally posted April 25th, 2006

Political theorists are concerned with how we ought, collectively, to live together. The job of the political theorist is to bring some precision to fundamental (yet contested) political concepts, like freedom, equality, democracy and justice. Ideas are powerful things: they exert great influence on the real world and help determine the fate of the lives of billions of people. So the political theorist’s job is an important one. A diverse range of social, economic and political arrangements can be (and have been) defended by reference to abstract political values. And the theorist helps equip us with the analytical tools necessary to differentiate between the legitimate functions of government and the arbitrary use of power.

Suppose a political theorist puts forth a new theory, or advances a nuanced spin on an existing theory. By what standards should we evaluate such a theory? In other words, what are the criteria by which we measure success and failure in political theory? What makes good theories “good”, and lousy theories “lousy”?

It is not surprising that political theorists of different stripes will give different answers to these questions. Liberals will likely emphasize criteria they believe are important to a defensible political theory (e.g. the promotion of toleration and autonomy), while socialists want a theory that recognizes the exploitative nature of capitalism. Feminists believe a theory must be equipped to deal with patriarchy and multiculturalists want a theory that addresses cultural inequality and difference. With so many theorists functioning with different specific ideas of what makes for good political theory, it is not surprising that students often find it difficult to know what they are expected to do in a political theory class (when it comes to writing an essay and critically assessing these debates).

I suspect everyone who has taught a course in political theory frequently encounters the following kind of comment when students begin debating rival normative theories: “Isn’t it all just a matter of opinion? There are no right or wrong answers!”. To this remark an instructor will no doubt invoke his/her well-rehearsed response, which goes something like this: “Some theories are backed by reasoned arguments and sound premises, while others might be based on mistaken or misguided premises. Our job is to figure out which positions can withstand rigorous critical analysis and which cannot”. But such manoeuvring simply side-steps the important question: What makes for a “good argument” or “sound premises” when one is talking about a normative discipline like political theory? Great question! And like all great questions it is difficult (yet fun!) to try to answer. To answer it I think one needs to provide a few more specifics about what political theory actually is.

I think the best characterisation of political theory is that advanced by John Dunn (1990) in ‘Reconceiving the Content and Character of Modern Political Community’. Dunn claims that the purpose of political theory is to diagnose practical predicaments and to show us how best to confront them. Doing this, he adds, requires us to develop the following three distinct skills.

1. Ascertaining how we got to where we are and understanding why things are this way.

2. Deliberating about the kind of world we want to have.

3. Judging how far, and through what actions, and at what risk, we can realistically hope to move this world as it now stands towards the way we might excusably wish it to be. (Dunn, 1990, p. 193)

The three skills identified by Dunn require a political theory to be well grounded in terms of both the normative and empirical assumptions and arguments it relies upon. The first skill requires a good comprehension of the empirical realities of the world. What are the social, political and economic histories of our societies? Telling this story is useful for understanding the current predicaments of one’s society. It might help us to understand why concerns of racial or gender inequality arise, or concerns about environmentalism, healthcare and welfare reform. Knowing something about the history of the culture, people, political institutions, economy, etc. of the society in question is important for being able to both diagnose its current ills and make a realistic prescription for remedying these predicaments.

Knowing one’s past is important for deliberating about what is feasible for one’s future. Thus the first skill relates to the second and third skills noted by Dunn. The third skill requires a political theory to be somewhat pragmatic in terms of confronting the range of options realistically open to us as we aspire to a more just and desirable social arrangement. If a political theory is not adequately grounded in reality it risks being discarded as mere “pie in the sky”. And this raises important questions about the second skill. As Dunn notes, this second skill is less explicit in its demands for imaginative self-discipline. If this second skill is not tempered by the first and third skills I believe we risk jeopardizing the value of political theory/philosophy.

Keeping Dunn's three skills in mind is helpful when constructing and assessing political theories. A good deal of the disagreement among contemporary political theorists stems from differing opinions concerning what constitutes a "healthy exercise" of these different skills.


Friday, October 06, 2006

Hereditary Blindness

The CBC reports this story about the discovery of the gene CEP290. Mutations in this gene are the leading cause of blindness in newborns. Here is a snippet from the story:

"We can now describe the genetic basis of hereditary blindness in babies in up to 75 per cent of cases and that's spectacular," said Koenekoop.

"I would say this is the single most important gene for congenital blindness just based on numbers, frequency and impact."

Children with LCA lose their vision because the gene defects cause the cells in the eye to die or malfunction. They have trouble fixing on objects and their eyes appear to jiggle in their sockets.
The disorder affects one in 30,000 newborns. About 10,000 young Canadians and 150,000 people around the world suffer from the hereditary disease.

Seed and her husband, Mike, know the heartbreak of the diagnosis personally.
Her son, Erick, was diagnosed at about four months after her mother noticed something wrong.
"I had never had a baby before and thought all new babies had really kind of roving eyes," she said. "All of a sudden our whole world turned upside down and our whole paradigm of us as parents and us as a family completely shifted." But Seed's despair turned to hope after Koenekoop called this year to say her son had the CEP290 gene. "It was the second best group of news that I could ever hear in my whole life - the first being that my son could see."

The discovery last July of the ninth gene responsible for LCA means gene therapy treatment may be developed that could partially restore vision. The Foundation Fighting Blindness, which partially funds Koenekoop's research, is delighted. "It certainly brings great hope for the future for our families," said executive director Sharon Colle.

The first human gene replacement therapy trial for vision is slated to take place in Philadelphia in January on another LCA gene - RPE65. If successful, it could lead to a treatment for the million patients around the world who suffer from juvenile and adult forms of the congenital ailment. In animal studies, researchers have found 30 per cent of sight can be restored with gene therapy.

"There's certainly hope on the horizon with regards to gene therapy and also stem cell research," Seed said. The gene discovery also opens the possibility of prevention, Koenekoop said.



Thursday, October 05, 2006

Nobel Prize in Medicine

Andrew Z. Fire and Craig C. Mello have won this year's Nobel Prize for Medicine for their discovery of RNA interference. See this.

You can view an animated tour through the process of RNA interference here.

MSNBC has an interesting interview with Andrew Z. Fire here. The interview explains the process of RNA interference and its potential applications. Here is a sample:

NEWSWEEK: You won the Nobel Prize for uncovering the process of RNA interference. Can you explain to our readers exactly what this entails?

Andrew Z. Fire: Our observations built on previous work by others in plants and fungi. When people tried to put extra copies of a gene into a plant, instead of getting more of the result, they got less. It seemed there was a way that the organism had to sense that there were extra copies—and not just extra copies but some that were also probably somehow messed up. So the organism had to be able to detect unwanted activity. And if it sensed this activity, which is going to come out in the form of some kind of RNA [an intermediary between stable DNA, which stores information in our genes, and proteins, which act on the information] that is unusual, it went ahead and got rid of them. We studied something similar with worms.

What did you find?

We wanted to know how an organism can tell the difference between foreign RNA and what's inside. It turns out it's a fairly simple structural distinction. If we took double-stranded RNA and put it into cells, that could shut a gene down pretty efficiently. The organism is looking for RNA where both strands are present in the cell (normally, it's just one strand). That was essentially the result. We could put two strands of an RNA in and it would shut down the corresponding gene...The cell not only gets rid of double-stranded material but looks for anything that looks similar and gets rid of it too.

What are the potential applications for humans?

The biggest medium-term application is researchers who are studying a given question. For example, let's say we're interested in an individual cancerous tumor—a population of cells that are doing something against our best interest—and want to know what makes it so we can get rid of it. One way is to go on a gene by gene basis through all the thousands of genes and decrease the function of each one at a time to figure out which are needed for the tumor to grow. The hope in that case is to find the gene that is needed for the tumor but not required for normal cell growth. So far, the experiment has been in a lab. If you had that information and really believed RNA to be a gene-silencing tool, though, you could imagine taking double-stranded RNA into a sick person and making them healthy. People are exploring that, but delivering it is very complicated. Some clinical trials are being done in tissues that are very good at receiving the RNA material like the eye and the liver.

What diseases are being targeted in the clinical trials?

One is macular degeneration, which causes blindness. The reason it's moved so quickly to clinical trials mostly has to do with the ethics of doing clinical trials. No other treatment is approved for macular degeneration. Also, the eye is fairly self-contained. It's easier to test it in the eye in a clinical trial than injecting it into the bloodstream...

This announcement from the Nobel Foundation is one additional snapshot of the larger story unfolding around us at rapid speed. The genetic revolution is an incredible story of human ingenuity and determination, one that could have a great impact on the life prospects of the genetically disadvantaged.


Monday, October 02, 2006

Nanoethics Paper

I have just finished a paper entitled “Deliberative Democracy and Nanotechnology” which is forthcoming in Nanoethics: Examining the Societal Impact of Nanotechnology (NJ: John Wiley and Sons Inc.) edited by Fritz Allhoff, Patrick Lin, James Moor and John Weckert. I hope to post the full paper on my research page in due time. But until then here is a sample (the introduction) of the paper.


Two recent reports concerning nanotechnology illustrate both its potential promise and its potential peril. The first report is the encouraging news that nanotechnology might help in the delivery of gene therapy (Dobson, 2006). Worldwide, there are over 1,000 clinical trials for gene therapy. There are currently 797 clinical trials for cancer, 102 trials for monogenetic diseases and 106 for vascular diseases.[1] Gene therapy involves switching off defective genes or inserting desirable genes into cells to prevent or cure disease. One of the major obstacles facing gene therapy has been gene delivery; that is, ensuring that the desired genes get into the correct cells. Nanotechnology might provide a solution to this problem. The efficacy of magnetic nanoparticle-based gene delivery has been demonstrated most clearly in vitro (Dobson, 2006, 286). This technique involves coupling genetic material to magnetic nanoparticles. “The particle/DNA complex (normally in suspension) is introduced into the cell culture where the field gradient produced by rare earth magnets (or electromagnets) placed below the cell culture increases sedimentation of the complex and increases the speed of transfection” (Dobson, 2006, 283).

The optimism one takes from the encouraging news about utilizing nanoparticle-based gene delivery is often tempered by news reports concerning the potential dangers of nanotechnology. The April 15th (2006) issue of the Economist reported the story of Magic Nano, a bathroom cleaner that contains tiny silicate particles that reduce the scope for dirt and bacteria to cling to surfaces. Magic Nano went on sale in Germany in March 2006. Three days after it went on sale it “was withdrawn from the market after nearly 80 people reported severe respiratory problems and six were admitted to hospital with fluid in their lungs” (Economist, 2006, 80). The Magic Nano incident led critics of nanotechnology, like the Action Group on Erosion, Technology and Concentration (ETC Group), to call for a global moratorium on nanotech research[2]. The concern (whether merely perceived or real) that nanomaterials pose serious environmental, health, and safety risks is one of the major obstacles facing these new technologies.

The contrasting stories we hear concerning the potential pros and cons of nanotechnology illustrate the importance of taking seriously the question of what would constitute an ethical regulation of nanotechnology. Nanotechnology covers such a diverse spectrum of technologies (e.g. therapeutic applications) that different values and principles are appropriate for regulating different kinds of nanotechnologies. Those who feel that both sides of the pro- and anti-nanotechnology debate have valid concerns might feel that the real challenge we face is finding a reasonable compromise between these different values rather than crowning any one value (e.g. efficiency) or principle (e.g. the precautionary principle[3]) as “supreme”.

One social theory that offers us a pluralistic and contextual ethical analysis of nanotechnology is deliberative democracy. Instead of trying to win a philosophical argument concerning the viability of first-order principles (e.g. efficiency, safety, etc.), deliberative democrats are more concerned with determining what would constitute a reasonable balance between conflicting fundamental values. In this paper I will examine what deliberative democracy can prescribe in terms of addressing the ethical and social concerns raised by nanotechnology. By examining how deliberative democracy applies to nanotechnology we see that an ethical regulation of nanotechnologies requires a division of labor between many different institutions and individuals. I argue that deliberative democracy prescribes that an ethical regulation of nanotechnologies requires responsible legislative activism which in turn requires accurate scientific information as well as an informed and reflective citizenry. Thus the ethical obligations of deliberative democracy extend to the way scientists conduct and communicate their research as well as to the way the media reports about nanoscience. The actions of scientists and journalists play a vital role in the formation of the reflective preferences of the larger citizenry. Thus deliberative democrats believe that informed, reasoned debate on accommodating the different stakes involved with regulating different kinds of nanotechnologies is essential if we hope to implement a fair and humane regulation of these new technologies.