
Personalized medicine: A work in progress

From time to time I am asked about the state of “personalized” medicine, or “precision” medicine. Usually, these terms refer to customizing some aspect of medical care based on an individual’s genome. They have come to encompass a range of “omics”-related tests, such as DNA variation, protein or mRNA expression, metabolites, and biomarkers. Typically the goal is to improve our ability to predict who is at risk for developing a disease, to predict the clinical characteristics and prognosis of the disease, and to predict the best drug(s) for treatment (a field called pharmacogenetics or pharmacogenomics).

When the human genome project papers were published simultaneously in Nature and Science, I was asked to write an editorial for Nature Medicine on the implications of the projects for pharmacogenetics. I asked Nature for a preprint of the article in advance of writing the editorial. To my surprise, I received the entire issue, and access to the data, about a month before publication. I was overwhelmed with excitement about what this “roadmap” could do for biomedical research.

We all had been receiving bits and pieces of the genome as it was being sequenced (initially mailed by the NIH on CD-ROMs to those who requested it), but when seen in its “final draft form,” with the analysis, it was breathtaking. I was so sure of its power that, in my editorial, I stated that by 2010 personalized medicine “should be in place” (1). Well, the field has not moved that quickly. This is particularly so for common diseases, although cancer risk, prognosis, and therapy have shown demonstrable progress.

Challenges of customizing medicine based on the genome

Omics-based personalized medicine is not available in the everyday clinic for several reasons. One reason is our failure to understand the effect of our environment on disease and its treatment. We now can obtain millions to billions of pieces of data from a person’s genome, but how much objective information do we have on that individual’s lifetime environmental influences? Very little, it turns out. So we have a mismatch between the depth and breadth of our information at the omics level and our knowledge of a patient’s environmental variables.

That same mismatch is also present for clinical variables. Early research looking for associations between, say, DNA sequence variation and response to a drug often did not include sufficient environmental, demographic, and medical information. So some associations turned out not to be true at all, or the effect of a given variation was not very large. This brings to mind another shortcoming of many early studies: we did not consider multiple variations, in multiple genes, as the basis for a given risk or response to a drug. In essence, we were too restricted in our thinking.

And then, most studies included too few patients. So while the results from a single study might be statistically significant, they were not necessarily applicable to the general population. Those early studies also often did not take into account the racial background of participants, again leading to spurious conclusions and a lack of applicability to patient care. A toy simulation of the small-study problem follows.
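
To make that concrete, here is a small simulation, written for this post and not drawn from any of the studies in question, of the “winner’s curse”: when studies are underpowered, the subset of results that happen to reach statistical significance will, on average, overstate the true effect.

    # Toy simulation: underpowered association studies that reach p < 0.05
    # systematically overestimate the true effect (the "winner's curse").
    # Purely illustrative; the numbers do not come from any real study.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    true_effect = 0.15   # small true difference in drug response (SD units)
    n = 25               # patients per genotype group -- deliberately small
    significant_effects = []
    for _ in range(5000):
        carriers = rng.normal(true_effect, 1.0, n)
        noncarriers = rng.normal(0.0, 1.0, n)
        t_stat, p_value = stats.ttest_ind(carriers, noncarriers)
        if p_value < 0.05:
            significant_effects.append(carriers.mean() - noncarriers.mean())

    print(f"power: {len(significant_effects) / 5000:.2f}")
    print(f"mean effect in 'significant' studies: "
          f"{np.mean(significant_effects):.2f} vs. true effect {true_effect}")

With these settings, only a small fraction of the simulated studies reach significance, and the ones that do report an effect several times larger than the truth, which is one reason early, small pharmacogenetic associations often failed to hold up.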

Pharmacogenetics and predicting warfarin dosing

The poster child for pharmacogenetics, for a while, was testing to predict the dose of warfarin needed by patients requiring anticoagulation. The tests initially covered variations in drug-metabolizing genes of the cytochrome P450 system. Early studies were promising, and smartphone apps began to appear so that a prescriber could input the genetic test results for a few CYP variants and predict a starting dose and/or the required maintenance dose.

The standard of care for warfarin dosing is frequent monitoring of the international normalized ratio (INR), a measure of the extrinsic pathway of coagulation essentially derived from the prothrombin time. Once the INR stabilizes, it still needs to be checked occasionally for under- or over-anticoagulation. Taken as a whole, studies comparing algorithms based on genetic testing with a traditional coagulation-clinic approach have been somewhat equivocal. Endpoints evolved into what might be considered “softer” than originally hoped, including shorter time to stable dose, improved percent time in therapeutic range, and bleeding risk. The Gage algorithm (2) requires 10 nongenetic input variables in addition to the genotype results; a sketch of its general form is shown below.
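
The published models take a log-linear form: clinical and genotype inputs are combined into a weighted sum whose exponential is the predicted stable dose. The coefficients below are placeholders I chose for illustration, not the values from Gage et al. (2), and the function is not for clinical use.

    # Sketch of a pharmacogenetic warfarin dose predictor. The log-linear
    # structure mirrors published models such as Gage et al. (2), but every
    # coefficient is an illustrative placeholder, NOT a published value.
    import math

    def predicted_daily_dose_mg(age_years, bsa_m2, vkorc1_a_alleles,
                                cyp2c9_star2_alleles, cyp2c9_star3_alleles,
                                takes_amiodarone, target_inr):
        log_dose = 0.9                          # intercept (placeholder)
        log_dose -= 0.005 * age_years           # older patients need less
        log_dose += 0.4 * bsa_m2                # larger body size needs more
        log_dose -= 0.3 * vkorc1_a_alleles      # VKORC1 -1639 A: more sensitive
        log_dose -= 0.2 * cyp2c9_star2_alleles  # reduced-function CYP2C9 allele
        log_dose -= 0.4 * cyp2c9_star3_alleles  # strongly reduced function
        if takes_amiodarone:                    # interacting drug lowers dose
            log_dose -= 0.25
        log_dose += 0.1 * (target_inr - 2.5)    # higher INR target, higher dose
        return math.exp(log_dose)

    # Example: 70 years old, BSA 1.9 m2, VKORC1 A/G, CYP2C9 *1/*2,
    # no amiodarone, target INR 2.5
    print(f"{predicted_daily_dose_mg(70, 1.9, 1, 1, 0, False, 2.5):.1f} mg/day")

Whatever the exact inputs and weights, the shared structure is that reduced-function CYP2C9 alleles and the sensitizing VKORC1 variant push the predicted dose down.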

And the caveats to this algorithm are: “Data strongest for European and East Asian ancestry populations and consistent in other populations. 45–50% of individuals with self-reported African ancestry carry CYP2C9*5, *6, *8, *11, or rs12777823. If CYP2C9*5, *6, *8, and *11 were not tested, dose warfarin clinically. Note: these data derive primarily from African Americans, who are largely from West Africa. It is unknown if the same associations are present for those from other parts of Africa. Most algorithms are developed for the target INR 2–3. Consider an alternative agent in individuals with genotypes associated with CYP2C9 poor metabolism (e.g., CYP2C9 *2/*3 or *3/*3) or both increased sensitivity (VKORC1 A/G or A/A) and CYP2C9 poor metabolism. See the EU-PACT trial for pharmacogenetics-based warfarin initiation (loading) dose algorithm with the caveat that the loading dose algorithm has not been specifically tested or validated in populations of African ancestry. Larger dose reduction might be needed in variant homozygotes (i.e., 20–40%). African American refers to individuals mainly originating from West Africa. These algorithms compute the anticipated stable daily warfarin dose to one decimal and the clinician must then prescribe a regimen (e.g., an estimate of 4.3 mg/day might be given as 4 mg daily except 5 mg 2 days per week)” (3).
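
That last step, converting a fractional daily estimate into a practical weekly schedule, is at least mechanical. A toy version, my own sketch assuming whole-milligram doses can be composed from available tablets:

    # Toy conversion of a fractional mg/day estimate into a weekly schedule,
    # assuming whole-milligram doses are achievable. My own sketch, not part
    # of any published algorithm.
    import math

    def weekly_regimen(daily_mg):
        low = math.floor(daily_mg)               # dose taken on "low" days
        high_days = round((daily_mg - low) * 7)  # days that get one extra mg
        if high_days == 0:
            return f"{low} mg daily"
        return f"{low} mg daily except {low + 1} mg {high_days} days per week"

    print(weekly_regimen(4.3))  # "4 mg daily except 5 mg 2 days per week"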

Really? Is this the best we can do? I leave it to the reader to ascertain if the community is where it needs to be for use of these tests in common practice.

Need for laboratory research to augment clinical studies

One thing that remains clear is the lack of in vitro and in vivo testing of the mechanisms of action of genetic variation. Not infrequently, a large study shows an association with a coding or noncoding variant, but little effort is made to study how the variant alters protein expression or function. Thus we know only that the variant is a “marker”: it could be in linkage disequilibrium with the actual functional variant, or it could change function in a way that we do not understand. Such studies can begin with cells or other in vitro methods, but may require the generation of genetically altered mice to add the environmental and physiological components. Wet-lab research is critical to moving personalized/precision medicine forward, but it is not well funded compared with clinical studies.

So, at USF Health we offer personalized medicine based on an individual’s genome (or the tumor genome) when evidence-based research has shown a clear advantage. We also have studies underway in the research realm, where we are accumulating results to ascertain which tests are useful for improving patient care. So my answer to the question of the state of personalized medicine is: “it is a work in progress.”

  1. Liggett, Stephen B. “Pharmacogenetic applications of the Human Genome project.” Nature Medicine 7.3 (2001): 281–283.
  2. Gage, B. F., et al. “Use of pharmacogenetic and clinical factors to predict the therapeutic dose of warfarin.” Clinical Pharmacology & Therapeutics 84.3 (2008): 326–331.
  3. Johnson, Julie A., et al. “Clinical Pharmacogenetics Implementation Consortium (CPIC) Guideline for Pharmacogenetics-Guided Warfarin Dosing: 2017 Update.” Clinical Pharmacology & Therapeutics 102.3 (2017): 397–406.

Stephen Liggett, MD
Associate Vice President for Research, USF Health
Vice Dean for Research, USF Health Morsani College of Medicine
Professor of Medicine, Molecular Pharmacology and Physiology

