Cell Culture Drug Resistance Testing (CCDRT) Cell Death Assays:
Misconceptions Versus Objective Data
The Modern Empiric Therapy Paradigm
One of the greatest ironies in cancer treatment research is the success story of childhood leukemia. Forty-seven years ago, this was a universally fatal disease, with survival measured in months. Over the ensuing years, the disease has become progressively more curable, owing to a series of prospective clinical trials in which patients were randomized between the best current therapy and a putatively improved, but empirical, form of therapy. Gradually, therapies got better and better, and today the majority of children with this disease are cured.
What happened next was that the NCI and medical school professors with NCI funding attempted to apply the childhood leukemia paradigm to all forms of cancer, with stunningly bad results. We have made virtually no measurable progress in treating advanced cancer over the past 20 years (Bailar and Gornik, NEJM 336:1569, '97).

In my opinion, one of the major reasons for this lack of progress is the absence of systems to model the behavior of real, clinical, human cancer. Most preclinical research has been carried out in animal tumors and in immortal cell lines (Gerald Dermer: The Immortal Cell: Why Cancer Research Fails, 1996). Most clinical research takes the form of prospective, empiric, randomized trials designed to identify the best treatment to give to the average patient, in a disease notorious for its heterogeneity, where no one is average. Of all the leading causes of death, cancer has arguably the most inadequate systems to model the behavior of human disease.

The scandal is not so much that the NCI does not have valid models with the same invaluable utility as bacterial "culture and sensitivity" tests, but rather that the NCI and American universities have made virtually no effort to develop such tests. The most obvious and promising models are those based on the study of drug effects on real human cancer tissue, freshly removed from the patient. Not only have the NCI and universities failed to make serious efforts to develop "culture and sensitivity" assays for cancer, but they have (both intentionally [e.g. Weisenthal, JNCI 84:1288-90, '92] and inadvertently) done virtually everything possible to discourage research in this area. As a result, the most important progress in this field during the past 15 years has taken place in the private sector and, increasingly, in Europe and Japan.

There is an overwhelming amount of published, peer-reviewed literature establishing, beyond a shadow of a doubt, that existing assays are capable of identifying both poor-prognosis and good-prognosis therapies, with good-prognosis therapies being about seven-fold more likely to work than poor-prognosis therapies. The literature is, however, complex, and cannot be intelligently interpreted without an understanding of Bayes' Theorem (a statistical concept) and a host of biological and pharmacological concepts. Good review papers exist (e.g., 15-19) and are increasingly appreciated, understood, and applied by private-sector and European clinicians and scientists. This literature is not understood by many NCI investigators and NCI-funded university investigators, who have been content to pursue their own failed paradigms.

A cynic might claim that private-sector clinicians and scientists like me are out to make money from promoting this testing; in my situation, this shoe doesn't fit, but even if it did, an honest cynic would admit that the existing, failed paradigms support thousands of professional careers and hundreds of universities and cancer centers. In a big industry like cancer, there is no such thing as a truly unbiased opinion. Caveat emptor.
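To make the Bayes' Theorem point concrete, the minimal sketch below uses purely hypothetical numbers (a 30% historical response rate and 80% assay sensitivity and specificity, none of which are drawn from any published study) to show how an assay result revises the probability that a given drug will work. With these assumed inputs, a "good prognosis" drug comes out roughly six- to seven-fold more likely to work than a "poor prognosis" drug, the same order of magnitude as the figure cited above.

```python
# Illustrative sketch only: the numbers below are hypothetical, not data from
# any published assay study. It shows how Bayes' Theorem converts an assay
# result into a revised probability that a given drug will produce a response.

def posterior_response(prior, sensitivity, specificity, assay_sensitive):
    """P(clinical response | assay result), by Bayes' Theorem.

    prior           -- pretest probability of response (historical response rate)
    sensitivity     -- P(assay calls drug "sensitive" | patient would respond)
    specificity     -- P(assay calls drug "resistant" | patient would not respond)
    assay_sensitive -- True if the assay calls the drug "sensitive" (good prognosis)
    """
    if assay_sensitive:
        true_pos = sensitivity * prior
        false_pos = (1.0 - specificity) * (1.0 - prior)
        return true_pos / (true_pos + false_pos)
    false_neg = (1.0 - sensitivity) * prior
    true_neg = specificity * (1.0 - prior)
    return false_neg / (false_neg + true_neg)

# Hypothetical inputs: 30% historical response rate, 80% assay sensitivity/specificity.
prior, sens, spec = 0.30, 0.80, 0.80
p_good = posterior_response(prior, sens, spec, assay_sensitive=True)   # ~0.63
p_poor = posterior_response(prior, sens, spec, assay_sensitive=False)  # ~0.10
print(f"good prognosis: {p_good:.2f}  poor prognosis: {p_poor:.2f}  "
      f"ratio: {p_good / p_poor:.1f}-fold")
```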