Cell Culture Drug Resistance Testing (CCDRT) Cell Death Assays:
Misconceptions Versus Objective Data
Reasons why ostensibly brilliant people, such as NCI-Designated Cancer Center Directors and NCI Clinical Scientists, know so little about these tests, along with a brief history of cancer "culture and sensitivity" testing.
Reason #1: The success of the empiric drug selection paradigm in childhood leukemia and the unwillingness to acknowledge the failure of this paradigm in virtually all other forms of cancer.
This was discussed previously (see Modern Empiric Therapy Paradigm).
Reason #2: If there are no (or not many) good drugs, then there is little need for a test to select among them.
Cancer chemotherapy came into its own in the early 1950s. From the very beginning, scientists (Black and Speer were the real pioneers) tried to develop tests to find the best drugs for individual patients. The tests really did seem to work, but they weren't perfect, and there weren't all that many drugs to choose from, anyway. No one got very interested, except for the Japanese, who dabbled with variations of the original Black and Speer test over the next three decades. From time to time, a different technology was introduced. One of the great overlooked achievements in cancer research was the brilliant work of a scientist named R. Schrek in the 1960s. In his papers of 25 to 30 years ago, there are obvious clues to a practicable testing method which should have been developed, long ago, to improve clinical research and drug selection for patients. Dr. Schrek worked at the Hines VA Hospital in Chicago, right up to his death four years ago at the age of 87. I will always be sad that Dr. Schrek never got any real recognition for his work. But timing is everything: the drugs available in the 1960s were not all that great, and no one appreciated the importance of a biological concept now known as apoptosis (discussed later) when Schrek published his work.
Reason #3: The power of the New England Journal of Medicine
In the late 1970s, a University of Arizona scientist named Dr. Sydney Salmon, along with Anne Hamburger, developed a different type of test, the "Human Tumor Colony Assay" (HTCA). These authors had the great fortune of having their paper published in the New England Journal of Medicine. When the NEJM talks, people listen. One of the early listeners was Dan Von Hoff (now of the U of Texas San Antonio), who was one of my fellow clinical oncology trainees at the NCI. Dan and Syd are two of the most articulate and persuasive people around. With the NEJM as a launching vehicle and their considerable energies and talents, before too long everyone was convinced that the HTCA was going to be one of the great breakthroughs in cancer treatment. My criticism of the cancer research leadership is that it bought into the notion that the HTCA was the only technology worth studying, despite my loud protestations that the concept was much too important to be shackled to one specific technology. Rather than carrying out comparative studies of different assay technologies, based on different biological endpoints, the NCI and NCI-funded university researchers put all of their eggs in a single basket and did, indeed, shackle the whole concept of culture and sensitivity testing to a single technology. The HTCA technology then sank like a stone in the minds of academic oncologists and dragged the whole concept down along with it. This occurred most precipitously as a result of a critical NEJM editorial published in 1983 (the NEJM giveth and the NEJM taketh away). Nearly everyone who had jumped on the HTCA bandwagon just as quickly jumped off, fairly or unfairly.
Reason #4: The paradigm of disordered cell proliferation as the central defect in cancer
In the late 70s and early 80s, cancer was considered to be a disease in which the primary problem was disordered cell division and proliferation. This concept was strengthened as oncogenes were discovered and the products of these oncogenes often turned out to be cellular growth factors and related proteins. With the perceived failure of the HTCA (which was based on cell proliferation), the only available alternatives were assays based on the endpoint of cell kill (or cell death). But the prevailing prejudice was that cell death was a crude endpoint in a disease whose most important problem was felt to be disordered cell proliferation. The feeling was that if cell proliferation assays didn't work, then surely cell kill/cell death assays couldn't work either. So, rather than moving from the cell proliferation endpoint to the cell death endpoint, American universities simply dropped the whole field of inquiry. They refused to support clinical trials of cell death assays, and they refused to approve grant proposals to study them.
Reason #5: Lack of appreciation of disordered cell death as a central defect in cancer
Unfortunately, in the early 80s, virtually no one had a clue about the importance of apoptosis, or programmed cell death, as an important (and perhaps the important) defect in the cancer cell. Virtually no one thought that anticancer drugs might act by triggering apoptosis in susceptible cancer cells. Had these concepts been appreciated at the time, the history of the last 15 years of clinical cancer research would have been far different. But they were not appreciated, and the field of predictive assays was entirely abandoned, except by a tiny cadre of persistent (some would say stubborn) investigators.
The combined effect of Reasons #1 through #5 was to exclude the research and application of cell death assays from the universities and to force these activities into the private sector.
I personally have always taken the "big tent" approach, meaning that we should learn from the clues left by everyone who has worked in this area and should strive to develop and apply whatever systems it takes to get the answers we want, rather than claiming that one particular system is the most theoretically valid approach and should therefore be used exclusively. My own concepts (which were built on the work of others, such as Dr. Schrek, whom I will always acknowledge) have been well documented in the literature of the last 15 years. Heretical in their time, these concepts have now been confirmed to be largely correct, beyond a shadow of a doubt. However, as of the mid-1980s, work in this field had ground almost to a halt in this country. The NCI leadership was doing nothing to encourage work in the field, and peer-review panels were shooting down virtually every proposal to study culture and sensitivity tests on fresh tumors.
What has happened since then is a tribute to the genius of the American private enterprise system. Start-up companies sprouted up, supported by venture capital, private and institutional investors, the sweat equity of their founders, occasional Small Business Innovation Research (SBIR) grants, and sometimes by personal IRAs, third mortgages, and children's college funds. These start-up companies (some are now more than 10 years old) are successfully achieving what the NCI and American universities could not and would not do: the introduction of these tests into the mainstream practice of American cancer medicine. But it's as if something sneaked into the house and bit the oncology opinion leaders on their backsides. They weren't expecting it; they didn't see it coming, and they still don't know much about it. But they are beginning to learn, as will individual clinicians and clinical investigators, if they are willing to keep an open mind and give all of this fair consideration.