VOLUME II
(Afternoon Session - November 15, 1999)

HUMAN TUMOR ASSAY SYSTEMS

HEALTH CARE FINANCING ADMINISTRATION
Medicare Coverage Advisory Committee
Laboratory & Diagnostic Services Panel

November 15 and 16, 1999

Sheraton Inner Harbor Hotel
Baltimore, Maryland
John H. Ferguson, M.D.
Robert L. Murray, M.D.

Voting Members
David N. Sundwall, M.D.
George G. Klee, M.D., Ph.D.
Paul D. Mintz, M.D.
Richard J. Hausner, M.D.
Mary E. Kass, M.D.
Cheryl J. Kraft, M.S.
Neysa R. Simmers, M.B.A.
John J.S. Brooks, M.D.
Paul M. Fischer, M.D.

Temporary Voting Member
Kathy Helzlsouer, M.D.

Consumer Representative
Kathryn A. Snow, M.H.A.

James (Rod) Barnes, M.B.A.

Carrier Medical Director
Bryan Loy, M.D., M.B.A.

Director of Coverage, HCFA
Grant Bagley, M.D.

Executive Secretary
Katherine Tillman, R.N., M.S.
TABLE OF CONTENTS

Welcome and Conflict of Interest Statement
Katherine Tillman, R.N., M.A.

Opening Remarks & Overview
Grant Bagley, M.D.

Chairman's Remarks
John H. Ferguson, M.D.
Brian E. Harvey, M.D., Ph.D.

Open Public Comments & Scheduled Commentaries
Frank J. Kiesner, J.D.
Larry Weisenthal, M.D.
Randy Stein
Richard H. Nalick, M.D.
William R. Grace, M.D.
John P. Fruehauf, M.D., Ph.D.
James Orr, M.D.
Robert M. Hoffman, Ph.D.
Andrew G. Bosanquet, Ph.D.
David Alberts, M.D.
Robert Nagourney, M.D.
David Kern, M.D.
Daniel F. Hayes, M.D.
Bryan Loy, M.D.

Open Public Comments & Scheduled Commentaries
Edward Sausville, M.D.
Harry Handelsman, D.O.
Harry Burke, M.D., Ph.D.
Mitchell I. Burken, M.D.

Open Committee Discussion
Day One Adjournment

TABLE OF CONTENTS (Continued)

VOLUME III
Opening Remarks - Introduction
Open Committee Discussion
Motions, Discussions and
PANEL PROCEEDINGS

(The meeting was called to order at 1:16 p.m., Monday, November 15, 1999.)
DR. FERGUSON: Dr. Sausville?

DR. SAUSVILLE: Good afternoon, all. And if I could have the first overhead, this says who I am, and the general topic that I hope you're interested in hearing about this afternoon. Anyway, my task this afternoon is to provide an overview, at least from the perspective of the preclinical therapeutics development program of NCI, of antitumor drug sensitivity testing. And I will approach this, therefore, from the standpoint of one who uses tests like this, and indeed, in some cases tests that have actually been used for this purpose, for the preclinical selection of drugs for more detailed evaluation, as well as from the perspective of an oncologist who has occasionally thought about using these tests in the treatment of patients. Next.
So the basis for this issue in cancer derives directly from the infectious diseases experience, wherein in a number of different disease categories, such as tuberculosis, it's well established that one has to establish that a particular patient's infecting bacillus is sensitive to the agents, and in a number of non-tuberculosis indications, which would include, for example, pyelonephritis or endocarditis, it is well established from the standpoint of standard medical practice that such sensitivity tests are valuable. Next.
The assays as applied to cancer ideally would have 95 percent sensitivity and specificity, and short of that goal, would hopefully be better at predicting outcome than the empirical choice of the physician. And the essence of the question from an oncological standpoint, therefore, is whether a particular test conveys information over and above what is implicit in the histologic diagnosis of a patient's tumor. Ideally the test would be biased in favor of detecting sensitivity rather than resistance, for this reason, and ultimately, these tests should be able to demonstrate an impact on ultimate outcome, as opposed to simply response, since in oncology, good outcome begins with a response; it does not end with a response. One ultimately has to have evidence of tangible clinical benefit that changes outcome.
So among the specific assays that through the years have been utilized is the by now classical Hamburger-Salmon clonogenic assay, wherein tumors that were biopsied, for example, were disaggregated, plated in agarose or other solid media after relatively brief exposures to drug, and ultimately colonies counted at 14 days. There have been modifications to this, most notably the capillary tube modification used by Von Hoff and colleagues, which seems to increase the number of patients for whom evaluable data are obtained. Modifications of this also include radionuclide based assays, in which radioactive thymidine is added after three days and thus, although it is a soft agar base, one can obtain information after shorter periods of time. And there are also non-agar based assays assessing radionuclide uptake in mass culture. Next.
Technical problems with clonogenic assays include a number of artifacts intrinsic to the practice of the assay, including clumping of tumor cells, the potential of growth perturbation from manipulation of potential clonogenic cells, reduced nutrient uptake from nonclonogenic cells with increase in the size of colonies that grow out in the treated cells, counting evaluations with a potentially large coefficient of variation, and poor cloning efficiencies. And a major limitation in the widespread use of this technique relates to the fact that in many instances, the majority of the specimens are not actually evaluable, and there is the inability of this type of assay to score small numbers of resistant cells, which in a clinical scenario are thought to translate ultimately into resistance to therapy, of the sort that is manifest by the subsequent relapse of a patient with drug resistant tumor. Next.
In various reviews, actually extending from the initial use of this technique into the early '80s, the cumulative experience is that a relatively small fraction of patients actually have colony growth. And the data that is tabulated here is contained in the references that were indicated. But also, there is the finding that the tests are clearly better at predicting negative or resistant assays than sensitive assays, such that for example, if one looks at those specimens that were sensitive in vitro as opposed to sensitive in vivo, we have a 60 percent true positive rate, with a range of 47 to 71 percent, in contrast to those specimens that were resistant in vitro and resistant in vivo, where there was, as you can see, a 97 percent true negative rate. Next.
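[As an editorial aside: the asymmetry Dr. Sausville describes, a modest true positive rate alongside a very high true negative rate, follows from how an assay's sensitivity and specificity combine with the underlying response rate. A minimal sketch of that arithmetic, using Bayes' theorem with illustrative numbers that are not taken from the transcript:]

```python
# Illustrative sketch (numbers invented, not from the testimony): how an
# assay's sensitivity, specificity, and the underlying response rate combine,
# via Bayes' theorem, into "true positive" and "true negative" percentages
# of the kind quoted above.

def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Return (positive predictive value, negative predictive value)."""
    tp = sensitivity * prevalence              # assay-sensitive and responds
    fp = (1 - specificity) * (1 - prevalence)  # assay-sensitive, no response
    tn = specificity * (1 - prevalence)        # assay-resistant, no response
    fn = (1 - sensitivity) * prevalence        # assay-resistant but responds
    return tp / (tp + fp), tn / (tn + fn)

# Example: a fairly discriminating assay applied where only 25% of
# patients respond to the drug at all.
ppv, npv = predictive_values(sensitivity=0.6, specificity=0.97, prevalence=0.25)
print(f"PPV={ppv:.2f}, NPV={npv:.2f}")
```

[Because most drugs are inactive in most patients (low prevalence of response), a negative assay call is almost always right even when the test itself is imperfect, which is the pattern the cited reviews report.]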
This led to a so-called prospective evaluation of chemotherapy selection utilizing a clonogenic assay, as opposed to the choice of a clinician. And again, this was published by Von Hoff and colleagues in 1990 in the Journal of the National Cancer Institute. And of the 133 patients randomized to single agent therapy, those where the therapy was assigned by a clinician had one partial response, and in 19 of 68 where it was possible to have an assay directed assignment, there were four partial responses. Certainly there was no evidence that this was statistically different, and one concluded, or this article concluded, that what one might conceive of as potentially a somewhat improved response rate did not translate into any noticeable effect on survival. And again, approximately a third of the tests could not be evaluated, and there was clearly no evidence of a survival difference in patients either treated according to that which was recommended by the physician, or in all patients compared versus the test population. Next.
Other specific assays which have come to the fore in an effort to meet some of the clear difficulties in the widespread use of the clonogenic assay include the so-called differential staining cytotoxicity assay, or DiSC assay, pioneered by Weisenthal and colleagues. And here, one is essentially assessing whether or not cells remain alive after short periods of culture after exposure to a drug. Thus, either marrow, buffy coat or a tumor suspension after disaggregation can be treated with drug for anywhere from one hour to four days. Interestingly, the quantification was aided by the addition of so-called duck red blood cells, which are easily distinguishable microscopically; a dye is added, and then after a cytospin, one can assess either the dead cells per duck red blood cells, or the live cells per duck red blood cells, based on the differential staining of live and dead cells with either fast-green, which stains dead cells, or HD, which stains live cells.
Other variants of this approach include the so-called MTT assay, which is a dye whose coloration properties depend on whether or not it is reduced by living mitochondria, or a fluorescein assay, where live cells take up a dye and hydrolyze it to a product that is detected by a change in fluorescence. But all of these techniques, again, don't depend on the outgrowth of clonogenic cells, but rather allow a relatively short term exposure to the drug to define whether there is an effect on the viability of the cells. Next.
When this assay, and again, this is in reference to the DiSC assay, was applied initially to hematologic neoplasms, there was clear evidence that there was increased cell survival, that is to say resistance, in patients who ultimately were not responsive to chemotherapy that was assigned on the basis of a knowledge of the tests. So in that respect, the assay was certainly suggestive that it might eventually correlate with clinical outcome. And in addition, there was a fairly good correspondence, again, with delineation of true positives and true negatives by this assay.
When this assay was applied to the somewhat more difficult clinical category of patients with lung cancer, here in an initial study with non-small cell lung cancer, the DiSC assay was performed assessing sensitivity to ten drugs, treating with a regimen that ultimately incorporated the three most sensitive agents. In this series of 25 patients, there was a 36 percent partial response rate with a median duration of 6.5 months in responders, and a median survival of about seven months, with an overall survival of about 12 months. There was clearly a threefold lower assay survival, that is to say, greater cell kill, in responders versus non-responders. However, these authors concluded that outcome as measured by response rate and survival is within the range reported by the literature; that is to say, even though you can detect this difference, it was not apparent whether or not it ultimately produced a different outcome than might be afforded by treating with drugs that would be available from the literature, knowing only the patient's histologic diagnosis. In addition, some drugs clearly had a much greater discordance in the predictive value of the test.
Thus for example, 5-fluorouracil did not seem to have any ultimate value in its performance, and on the other hand, behavior to etoposide was essentially predictive of the behavior of all of the drugs. And actually from a scientific perspective, we now recognize that since many of these agents act by inducing apoptosis, this actual result, retrospectively, is not that surprising.

Interestingly, this paper also introduced the concept of so-called extreme drug resistance. That is to say, you can define patients who had resistance greater than one or more standard deviations beyond the median in the population, and these patients essentially had zero percent response to any of the agents.
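[As an editorial aside: the extreme-drug-resistance cutoff just described can be sketched directly. The following is a hypothetical illustration, with invented assay survival values; the function name and data are not from the transcript.]

```python
import statistics

def extreme_drug_resistance(survivals: list[float]) -> list[bool]:
    """Flag specimens whose assay cell survival exceeds the population
    median by more than one standard deviation ("extreme drug resistance")."""
    median = statistics.median(survivals)
    sd = statistics.stdev(survivals)  # sample standard deviation
    return [s > median + sd for s in survivals]

# Invented fractional cell survival for eight specimens against one drug.
survivals = [0.20, 0.35, 0.40, 0.45, 0.50, 0.55, 0.60, 0.95]
print(extreme_drug_resistance(survivals))
# Only the 0.95 specimen exceeds median (0.475) + one SD (~0.22).
```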
This assay was also applied in a study that was recently published from the NIH, which attempted to individualize chemotherapy for patients with non-small cell lung cancer. And from a population of 165 study patients, 21 received DiSC based regimens, and these had a 9 percent partial response rate, whereas 69 patients received empiric treatment with etoposide and cisplatin; these had a 14 percent partial response rate. And ultimately, the survival with the in vitro best regimen was comparable to what one would have expected from the empirically chosen chemotherapy.

Interestingly, this study also revealed an issue that has to come up in any test in which there is a second or subsequent procedure to obtain tissue, in that the survival of patients who had any in vitro test was actually worse than those without, and this implies potentially that those people who had a sufficient volume of tumor to have the tests had an intrinsically worse survival than those who did not. Next.
And the last clinical study that I'll touch on also emanated from the NCI and was published in 1997. This attempted to use the DiSC assay in limited stage small cell, and here we turn to, and consider, the use of the test in what may be considered its most favorable scenario, because in this disease, which is traditionally, and now actually standardly, treated with the combination of radiation therapy and chemotherapy, one would potentially treat empirically with a regimen known to produce a high level of response, and then come back after finishing consolidation with radiation with either a regimen chosen on the basis of the in vitro sensitivity, or a standard approach using an additional three drugs that the patient had not seen previously, that would be regarded as standard or part of the standard care of patients with small cell lung cancer.

And in this study, there was actually a trend towards somewhat improved survival in patients who could actually receive the in vitro best regimen, but it certainly was just a trend. And most interestingly, of the 54 patients that were entered, only a minority of the patients could actually be successfully biopsied in this very, shall we say, well coordinated, well resourced clinical trials scenario. Next.
So in terms of summarizing what I list here as my own disinterested perspective on whether or not chemosensitivity testing is what one might consider to be ready for prime time in widespread use, I would offer that from my perspective, no method has emerged as a quote-unquote gold standard, owing to methodologic variation in the definition of what constitutes a resistant or sensitive test; the unfortunate fact that one cannot get reliable data from many if not most patients; and, in the few completed prospective or randomized trials, the fact that there is little assurance that ultimately there is a difference effected by the test.

What we ultimately need, if tests of this nature are to be potentially useful, is probably better drugs, because in point of fact, since most of the drugs are unfortunately inactive in many of the diseases in which these tests would be used, knowing that they won't work is not actually terribly valuable.
We need a method that is applicable to all specimens obtained in real time with the diagnostic specimen; that is to say, to require a second test, or second procedure, in order to obtain the specimen inevitably introduces potential biases in studies related to those patients who could withstand or undergo these procedures, as well as, of course, making the performance of the test more costly than one might potentially desire.
But on the other hand, I think with newer approaches, including gene expression arrays and serial analysis of gene expression, the future potentially holds better, and hopefully more useful, techniques to assess this. But whatever the test, be it some permutation of a currently available test, or one of the newer methodologies here, its ultimate value should be established in prospective randomized trials, where one uses the diagnostically guided as opposed to the empirical treatment, before assessing whether or not it is actually valuable.
And I thank you for your attention.

DR. FERGUSON: Thank you, Dr. Sausville. I think you've gone in shorter time than even I asked for, and so I'll open for a question or comment. Yes, Dr. Hoffman?
DR. HOFFMAN: Yes. I would like to ask Dr. Sausville his opinion about the assays that were discussed this morning, based on three dimensional culture and other new third generation techniques, that address these problems and have been shown to be able to assess greater than 95 percent of the patients' specimens, have shown survival benefit, and have shown very high correlation to response. I would like Dr. Sausville's comments on this morning's talks.
DR. SAUSVILLE: Again, I wasn't here this morning, and indeed, my brief was not to comment on specific assays from this morning's activities, but to offer an overview of problems in the field in general. And I would certainly say that if the tests that were proposed this morning seem of interest, the real question is, have they been evaluated in prospective randomized studies. Because until they have, and as far as I'm aware, they have not, it would be, I think, premature to conclude that they are, therefore, of widespread general use.
DR. FERGUSON: Dr. Weisenthal?

DR. WEISENTHAL: Now would be as good a time as any to address the issue of the requirement of prospective randomized trials for acceptance of this technology. I think it's a very important issue, several speakers have raised it, and the issue is this: Should these tests be used in clinical medicine until it has been established in prospective randomized trials that patients treated on the basis of assay result have a superior therapeutic outcome to patients treated without the assay result? The cop-out way to answer this, which I'm not, this is not my answer to it, but what I could say if I wanted to cop out, and it's perfectly valid, is that never has the bar been raised so high for any diagnostic test in history.
Dr. Sausville began his talk by pointing out bacterial culture and sensitivity testing; one of his examples was serum bactericidal testing. Serum bactericidal testing, for those of you who may know it, is something that Medicare does reimburse for. It's very controversial; it's much more controversial, actually, than cell culture drug resistance testing. The performance characteristics are certainly inferior based on sensitivity and specificity. And furthermore, there has certainly never been a prospective randomized trial showing survival advantage or therapeutic outcome, you know, higher cure rate or anything, whether you use serum bactericidal testing or not, or any other form of antibiotic sensitivity test.

We're talking about laboratory tests, not a therapeutic agent, and I think that one would be advised, at least first of all, to judge them on the basis of the way that other laboratory tests have been judged, and that is, do they have acceptable accuracy, sensitivity and specificity.
However, moving on to the question of the prospective randomized trial, all of us, no one more than those of us who have been working in this field for 20 years, would love to see prospective randomized trials, physician's choice therapy versus assay directed therapy. This has been the Holy Grail. I hope before I die, I will be able to participate in such a trial. I mentioned earlier, the fact is that there have been a lot of energetic, very talented people that have devoted their careers to this, and the best example is Dr. Dan Von Hoff, who is the most energetic. He and I were clinical oncology trainees together at the National Cancer Institute. We both started working in this field in the same lab at the same time. And Dan had -- you know, my CV lists about 50 publications; Dan's CV is probably closing in on 2,000. And he's organized more prospective randomized trials and things like that. He was unable to successfully get a study initiated, patients accrued, and completed. I have devoted enormous amounts of effort to getting those trials done, and for one reason or another, they didn't accrue patients, and things like this.
I want to point out that Medicare has a problem, and the problem is not in the year 2000, but in the years between 2010 and 2015. The budgetary crunch in Medicare is going to come in 2010 and 2015, when those of us who are now 52 years old are going to be 60, 65, 75 years old, and we're going to be getting cancer. What's going to happen over the next ten years is that there's going to be an ever increasing array of partially effective and very expensive cancer treatments. We're seeing that now. Drugs are being approved at a very rapid pace. We don't have a clue how to use them.
We brought up the idea about using the test as a litmus test, like should you pay for the therapy. Well, the only way that you're ever going to be able to use the test as a litmus test is if you do the prospective randomized trials. And I would submit to you that the way to get the prospective randomized trials done is as follows: Look at the data that you heard about this morning. Surely, you must be convinced that there is a germ of truth in this. You know, there is a consistent, overwhelming -- I think that study after study is showing that these tests do predict, they can identify the difference between good treatments and bad treatments. So it is not much of a leap of faith to say that if only someone could do the trials, then there's a good chance that they would turn out to be positive, and if they do turn out to be positive, by the year 2010, we will have a wonderful tool to triage therapy, to triage patients, right at the time when Medicare most needs it, when the budgetary crunch comes, when we've got all these expensive cancer therapies. You know, I gave you the example of the five patients treated with the bone marrow transplants at $200,000 a patient, who did not benefit from that, who then got an assay and had a great result. What if they had gotten the assay first? It has the potential to be enormously cost effective. But the only way that it will be used in that way is if you do the trials, but the only way -- it's a catch-22 -- the only way trials will ever get done -- I personally believe that if Medicare approves this, it will be the shot heard round the world. SWOG, ECOG, CALGB, they will be lining up to do trials. You guys, you know, come back and maybe approve it conditionally, come back in five years and see what's happening.
DR. BAGLEY: Well, you know, today -- it brings up an interesting point, and I think you bring up the comment that, you know, never has the bar been this high. Well, I would take exception to that. I think the bar is not any higher for this than for anything else that we are currently looking at. And it is not something that we are not used to hearing for other things too, and that is, gee, we're paying for things that were never subjected to any scrutiny, so why should we subject this to scrutiny. I mean, that's -- we hear it all the time, and that's just not going to work in this day of evidence based medicine. And I'll tell you because, you know, how much it costs isn't an issue that we're really here to talk about today. Because, you know, two years ago HCFA reorganized and changed the whole focus of coverage, moved the coverage office away from that part of HCFA that pays for things and looks at program integrity, and moved it into the place at HCFA that looks at quality and clinical standards.
And that's exactly the focus we ought to have, because, you know, what it boils down to is, it's not just why not pay for it, it doesn't cost that much, or it might save a little money. But it's let's pay for it because it's the right thing to do, and it represents quality medicine. And when that happens, you know, we shouldn't just pay for it; we should pay for it, we should promote it, and perhaps, if the evidence is there, we ought to insist on it. I don't think the clinical community or the beneficiary community would tolerate us insisting upon a pattern of behavior, or even promoting a pattern of behavior, without evidence, and so why should we pay for it without evidence? And that's the change we're trying to make, that's been the whole point of changing the coverage process, putting together advisory committees like this, is to say, let's look at evidence and let's make decisions about what we pay for based on quality, and once we know what quality is, let's not just pay for it, let's not stop there, but let's pay for things that we are willing to promote and perhaps even insist upon. And so, that is the reason for the focus on evidence, and it's going to be there. And the fact that we may not have subjected past technologies to the same evidence doesn't mean we can't go back and look at them, time willing, but it doesn't mean we should lower the bar for new technologies.
DR. FERGUSON: Do you want to respond a minute, Dr. Sausville?

DR. SAUSVILLE: Yes, I do wish to respond to that. And I want to thank you for that perspective, because clearly, there is no lack of good intentions here. Clearly, the desire to convey useful patient benefit goes without question. And the efforts that were cited over the past two decades have really been enormous efforts in that regard. But one distinction that I must point out is that when one considers the bacteriologic analogy, the diagnostic specimen, that is to say the bacteria growing in a bottle, equals the test specimen. So that is one intrinsic difference. In many cases, cancer related sensitivity testing requires additional efforts to get and process tissue different than the routine. So it is a point where the analogy is not exactly apt, I think. And you quoted the endocarditis issue, and you're right. It is controversial as to whether or not ultimately sensitivity testing is beneficial, because among the lethal consequences of endocarditis are a series of almost anatomical problems, valve problems, thrombi, et cetera, that are not in any way predicted or dealt with by the sensitivity testing. So again, I think that the two recall each other, but have important differences in thinking about the ultimate value of the tests.
DR. FERGUSON: Dr. Sundwall?

DR. SUNDWALL: Just a quick question. Dr. Sausville, I am a family physician, not an oncologist, but I was very perplexed by your statement. If I heard it correctly, you said, knowing what drugs won't work is not all that helpful. I don't understand that, given the morbidity and the difficulties with chemotherapeutic agents. I've had many patients suffer terribly from chemotherapy, and how can you say knowing what won't work isn't that helpful?
DR. SAUSVILLE: Because of the context in which -- and I respect your point, and I certainly don't mean to in any way minimize the soul searching, on the part of both physicians and patients, that goes into the decision to entertain therapy. But in oncology, frequently the treatment is driven by the histologic diagnosis. So if, for example, the initial diagnosis is small cell lung cancer, even if one could have a pattern of drugs that have more or less susceptibility, I am not aware that such tests would be considered definitive in saying, well, because you happen to have a resistant small cell lung cancer, you should not receive any therapy. So in that case the therapy, or choice of therapy, is ultimately driven by the histologic diagnosis that's apparent. Consider the opposite point: somebody with a chemotherapy refractory neoplasm, such as pancreatic or renal, which are problems which, as far as I'm aware, are not considered responsive to any set of agents.
Again, the information of whether the patient has that dire situation is implicit in the histology. It's not clear that any test that can be done ultimately defines a drug that can change the outcome that is at the present time ordained by the histology. So I take your point, that being able to reliably choose drugs that convey a useful clinical benefit is very worthwhile and a goal that should be pursued. I am not sure that the current tests actually allow the clear delineation of such agents. And in that regard, you can tell a patient who has the unfortunate diagnosis of pancreatic cancer that they are likely not going to respond to a medicine chosen on that basis, or chosen after having gone through an additional test to obtain tissue and then tested for assay sensitivity.
MR. KIESNER: I think Dr. Sundwall asked a very interesting question, and I think there are at least two clinical strategies for using this type of information. I think on one hand you can say, we're going to select a drug, and on another, there may be a different clinical setting, and I will give you two examples. I think this is very important.

Dr. Alberts spoke this morning about a clinical situation where he would be referred a patient from another hospital, and that patient may be unaffected by the primary care, he has relapsed, the tumor is growing, and they send the patient to him: Doctor, what can you do to help me? In that situation, there may be three or four or five different drugs, single agents, none of which have been determined to have a significant clinical benefit over the other drugs in that situation. If I am a patient and if my physician can tell me, of the five drugs, Frank, two of those drugs you're resistant to, what has he told me? He's said, I'm not going to use those two drugs; I've saved you from the possibility that you're going to get those two drugs and not benefit from them. It's very, very well documented that these tests are able to identify resistance, and if I'm a patient and if my physician in that setting can identify the resistance, I believe he has done me a real service.
The second situation is one which I experienced personally. And I'm not mentioning it because it's personal, I'm mentioning it because it's exemplary of the position that a lot of families can be in in relation to elderly Medicare patients. I'm from Minneapolis. My father was in St. Mary's Hospital. He was ten years past Medicare age and was being treated for cancer. We saw what the drug was doing to him. If his physician could have come to me and said, Frank, I have two or three other drugs, two or three other choices I could try, and I have done a test and I could see that they are all resistant, I don't think we should go any further. For the family, it was a very difficult decision to make, do you go further. We made the decision not to. But to this day, if I would have had an assay that would have told me the drugs that the physician was considering will not work, I would feel I would have been served, our family would have been served, and my father would have been served. Elimination of drugs, identification of drugs in those types of clinical settings that don't help, or that help you stop therapy, I think is something worth considering.
DR. FERGUSON: Thank you. Very interesting.
DR. SAUSVILLE: So my response to that is the essence of the issue, and it also pertains to the question before, which is whether or not one could have reached the conclusion that drugs would not have benefitted your relative from the diagnosis itself, and not ultimately have had to rely on a test. And here the unfortunate performance properties of these tests -- that when a drug is predicted to be sensitive, the outcome is unfortunately not any different in many cases, and in fact in all cases that I'm aware of, than when drugs are seen as resistant -- is the essence of why we are in a quandary about how to appropriately use this.
DR. FERGUSON: Thank you. Harry. Dr. Handelsman?
DR. HANDELSMAN: I'm Harry Handelsman. I'm at the Center for Practice and Technology Assessment, Agency for Health Care Policy and Research, and our office was asked by HCFA to review the 1990 article by Kern and Weisenthal on the use of suprapharmacologic drug exposures. And I'm going to briefly synthesize what I think was the essence of that article and then give my personal critique. Unfortunately, some of this is going to repeat some of the data that you heard earlier today, and that's unavoidable.
Bayes' theorem suggests that drug sensitivity testing in vitro will be accurate in predicting clinical drug resistance in tumors with high overall response rates only if the assays have a specificity of greater than 98 percent for drug resistance. A 1989 review of the literature by the authors indicated a 30 to 50 percent false positive rate, and a false negative rate as high as 15 percent.
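That Bayes argument can be sketched numerically. In this hypothetical calculation (a 20 percent resistance prevalence and 85 percent sensitivity are assumed for illustration; they are not figures from the paper), a 40 percent false positive rate, in the middle of the range the authors reported, ruins the predictive value of a resistance call in a highly responsive tumor:

```python
def ppv_resistance(prev_resist, sens, spec):
    """Post-test probability that a tumor the assay calls 'resistant'
    truly is resistant, by Bayes' theorem."""
    true_pos = sens * prev_resist
    false_pos = (1 - spec) * (1 - prev_resist)
    return true_pos / (true_pos + false_pos)

# Highly responsive tumor type: only 20% of patients are truly resistant.
high_spec = ppv_resistance(0.20, 0.85, 0.98)  # 98% specificity
low_spec = ppv_resistance(0.20, 0.85, 0.60)   # 40% false positive rate
print(round(high_spec, 2), round(low_spec, 2))  # → 0.91 0.35
```

With 98 percent specificity a resistance call is right about nine times in ten; with a 40 percent false positive rate, roughly two of every three resistance calls come from tumors that would actually have responded.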
This reported assay, which was developed by Kern, uses a soft agar culture with products of concentration times time higher than those which can be achieved clinically, and used drug exposures 100-fold higher than other contemporaneous studies. Response assessments were made by retrospective and blinded chart reviews. The authors reviewed 450 correlations between assay results and clinical response over an eight-year period. The assay was calibrated to produce extremely high specificity for drug resistance. Two assay end points were used, colony formation and thymidine incorporation.
Overall response rates were 28 percent using the colony formation end point, and 34 percent using the thymidine incorporation end point. At the assay's lower cutoff value, the assay was 99 percent specific in identifying non-responders, fulfilling the Bayes prediction. Patients with drug resistant tumors could be accurately identified in otherwise highly responsive patient cohorts. The demonstration that the post-test response probabilities of patients varied according to assay results and pretest response probabilities allowed the construction of a nomogram for predicting probability of response.
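A response-probability nomogram of this kind is a graphical form of the odds version of Bayes' theorem; a minimal sketch of the underlying computation (the 30 percent pretest probability and the likelihood ratio of 0.1 are hypothetical inputs, not values from the paper):

```python
def post_test_prob(pretest_prob, likelihood_ratio):
    """Convert a pretest response probability into a post-test probability
    via the odds form of Bayes' theorem -- the calculation a nomogram
    performs graphically."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    post_odds = pretest_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Hypothetical case: 30% pretest response rate, and an assay result whose
# likelihood ratio for response is 0.1 (a strong resistance signal).
print(round(post_test_prob(0.30, 0.1), 3))  # → 0.041
```

A strongly "resistant" assay result drags a 30 percent pretest response probability down to about 4 percent, which is exactly the kind of shift the nomogram lets a clinician read off.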
In 1976, it appeared that no method of predictive testing had gained general acceptance, and during the subsequent decade, high false positive and false negative rates continued to plague the field of in vitro testing.
The clinical advantages of developing a highly specific drug resistance assay include: the avoidance of the use of inactive agents in treating responsive tumors; the avoidance of drug related morbidity of inactive agents; the identification of drug resistant tumors for timely consideration of alternative therapies; and obviously, the cost savings of avoiding the use of ineffective agents. Alternative assay methods are available. However, the use of a cell culture assay had the advantage of measuring the net effect of both known and unknown mechanisms involved in drug resistance.
It is indeed possible to estimate the post-test response probability for specific drugs in specific tumors and patients. This can be achieved through the determination of assay results and the application of a constructed nomogram for assay-predicted probability of response.
In general, efficacy studies in both in vitro and in vivo tumor models provide an opportunity to obtain data on both efficacy and toxicity, and to refine dose and schedule information for clinical trials. In vitro testing has been extensively applied to determine the potential efficacy of individual drugs and remains an attractive alternative to testing empiric regimens in phase I and phase II clinical trials. In vitro testing can differentiate active and inactive agents, but cannot serve as a substitute for in vivo studies, despite providing elements of both positive and negative predictive value.
Combinations of agents, which are the most widely applied treatment strategy, are best evaluated using in vivo models, where both toxicity and pharmacokinetics can be adequately studied. Although in vitro assays can provide primary drug resistance data, the most relevant outcome from such assays is improved patient survival, and there have been no clinical trials demonstrating such a result. In addition, it remains to be determined whether in vitro testing will be found to have direct clinical applications for disease- or patient-specific therapies. There have been encouraging reports of a survival advantage for patients treated with in vitro directed therapies, but these require confirmation from larger numbers of patients and a variety of tumors.
Both randomized and non-randomized studies comparing tumor responses to chemotherapy selected by in vitro testing with empirical chemotherapy have produced conflicting results. Response rates appear to be better with in vitro selected agents. However, the impact on survival has not been adequately addressed. Ideally, in vitro assays should be correlated with both response and survival data. The most significant issue in the realm of cancer chemotherapy is that of resistance mechanisms. The ability to identify ineffective agents in these assays, albeit potentially important, does little to elucidate the mechanisms problem. The assay described in this article can perform its intended task of identifying resistant tumors and determining a probability of response, but its clinical utility has not been established.
DR. FERGUSON: Thank you. I think we have time for a few questions. Panel, or others? Comments? Yes?
DR. FRUEHAUF: I think that was a nice summary of the paper, and I think the issue of survival was addressed this morning.
DR. HANDELSMAN: Excuse me, if I can interrupt. The issue of survival was predicted, but it wasn't on a comparison with alternative therapies.
DR. FRUEHAUF: That's true. It was survival in a blinded prospective way, looking at people just getting empirical therapy and asking the question, if you're not going to respond, will you have inferior survival? And we've addressed the issue of, do you want to get a drug, as a person who has cancer, that won't work and won't benefit your survival? And I think that's the important point that this paper is establishing, the utility of knowing that a drug will not be of benefit to a cancer patient.
And I have dealt with neuropathies that are incapacitating to professional tennis players. I have had to deal with all sorts of toxicities, and many of these people progress through therapy and die of their disease, and the quality of their life during that progression was significantly adversely affected by getting ineffective therapy. So the clinical utility question to me, as a practicing physician, is to not harm people with ineffective therapy, therapy which has been demonstrated not to benefit their survival. And I think most people understand, if you don't respond to the therapy, you're not going to live longer. And we're not trying to say that the test will predict a drug that will help people live longer, for drug resistance assays. We are trying to say, and you stated, and Dr. Sausville stated, that these assays accurately predict drugs that will not be effective. And then the question is, so what? I think the answer to the question so what is, so I don't want to give those drugs to my patient.
DR. FERGUSON: Thank you. Okay. I guess we can go on. Harry, thank you very much. Dr. Burke?
16 Dr. Burke?
17 DR. BURKE: I think we're going to have
18 a lot of fun this afternoon. My name is Harry
19 Burke. I'm a consultant to HCFA. I'm an
20 internist. I'm a methodologist. I'm only here
21 for today.
22 The first couple slides that I am going
23 to present are not HCFA's position, they're my
24 personal review on the subject, and shouldn't be
25 considered HCFA's policy. I am going to address
1 three issues today. First, the levels of
2 evidence, which has been raised several times by
3 various speakers. I'm going to talk about test
4 accuracy. And then I am going to talk about the
5 Kern and Weisenthal article that Dr. Handelsman
6 just gave us an introduction to.
But before that, I would like to make a couple of comments. First, extreme drug resistance is really a therapy specific prognostic factor. It really has to be looked at in the context of other therapy specific prognostic factors. Dan Hayes was right. It's like ER and PR, and these other factors. And there's a scientific rationale underlying therapy specific prognostic factors that must be dealt with. The utility of the test depends on the characteristics of the test; that's clear. But it also depends on the efficacy of the treatments, if it's a therapy specific prognostic factor, because they're inextricably linked together, and you can't separate the two. And it depends on the prevalence of the disease or the resistance in the population under study. So it's really those three factors together that must be taken into consideration when looking at something like this.
Let me make another point, and that is, when we talk about the utility of this test, we can't be talking about the utility for individual patients. We have to be talking about the utility of the test for a population of patients. So we can't switch back and forth between the two, because we're really mixing apples and oranges when we do that.
I'd like to make a couple of comments about what has been said earlier. Fruehauf, Weisenthal and others have suggested consistent findings across studies, and they've made a claim that that proves something. And I would like to suggest that, yes, consistent findings across studies can be due to robustness of the underlying phenomenon. But they can also be due to consistent biases across the studies. And so if you are going to make a claim for consistency of 35 studies that all suggest the same thing, and that's a robustness claim, you'd better be prepared to tell me why it isn't due to biases in the 35 studies themselves. So you really have to look at each of the 35 studies and you have to ask the question, are these really consistent? You can't just wave a hand and point to 35 studies. You have to rule out the alternative explanations.
Secondly, I'm a little confused. Kiesner pointed out that we could use this test at the bedside at the discretion of the patient, or I mean of the physician, while Kern suggested that it could be used to deny a particular drug. And I need to know, which is it? Is it that the evidence is so convincing that it can be used to deny a particular therapy, or that it's not that convincing, and it's just one of an armamentarium of tests that are available to the physician?
First, let me just do a little background. Comparative clinical benefit is what I'm looking at. This is my gloss on reasonable and necessary. It could be defined as the test or treatment providing a measurable improvement over all the current relevant tests and treatments, at a cost commensurate with the measured improvement. I also suggest that FDA approval is prima facie evidence of safety and efficacy, but if that isn't there, I think safety and efficacy must also be demonstrated. And a comparative clinical benefit study of the test or treatment must compare it to the other tests or treatments. It doesn't stand alone. And so when you say, well, this test or treatment is really good, you have to say what the other tests and treatments are, what you're comparing it to.
Now, I am not totally a believer in randomized clinical trials for everything. My suggestion is that there may be three levels of evidence that can be adduced. Strong evidence, which is either a large prospective randomized clinical trial, or two large retrospective studies where one study independently replicates the other study. I think that's good science as well. Or two medium-size randomized prospective trials. I think all of those would be strong evidence. If I saw a really large retrospective study that was independently replicated by independent investigators, at independent institutions, I would take that as fairly strong evidence.
Moderate strength: medium-sized prospective trials, a large well designed retrospective study that hasn't been replicated, or two medium-size randomized prospective clinical trials.
Weak evidence: small properly designed and implemented prospective randomized trials, I think, are weak evidence, and I think are well recognized as that, and I think Peto and others have suggested meta-analysis to overcome the weaknesses of small randomized clinical trials. Alternatively, two medium-sized retrospective studies that were done by independent investigators might be good evidence.
But insufficient evidence: small systematic studies, I consider them really exploratory rather than evidence. Case series, I think, are well considered as anecdotal. And any study that's not properly designed, implemented or analyzed must be considered fatally flawed. Large is more than 500 patients; medium-sized, 250 to 500; small is less than 250. You know.
Okay. Test accuracy. What is a properly designed, implemented and analyzed study? Well, test accuracy of course is an association between each patient's predictions based on the test, and each patient's true outcome. That's test accuracy. The factors that affect test accuracy include the study population: were the patients who were selected easy to predict? Because you can select patient populations, and we'll get into that later, that are very easy to predict by just about any test. The test characteristics: was the test assessed in the clinical setting in which it's intended to be used? The reproducibility: does the prediction variability increase across laboratories and reagents? And finally, the method of measuring the accuracy: was the correct method used?
And I'm going to focus on two of these, the first and fourth, which are the most important.
Okay. Sample size, or study sample characteristics. The composition of the study population makes a difference in the observed accuracy. A sample with only extreme cases, i.e., the predictors are extreme values of their range, will be easier to predict than a sample with many intermediate cases, where the predictors are mostly in the middle of their range. For example, for women with breast cancer who have many positive lymph nodes, their outcomes are fairly easy to predict. Women with metastatic disease, their outcomes are pretty easy to predict. It isn't a hard task to do. What is hard to do is to predict for the women with small tumors and with no lymph node involvement or metastatic involvement; that's really tough to do. So, if you just pick an extreme population, it turns out those are pretty easy predictions to make, but it turns out that most patients aren't in the extreme, so it's not very useful. Okay? Thus, the sample must be representative of the real world in which the test is to be used.
Measurement of test accuracy. There are several ways to assess test accuracy. The correctness of the accuracy assessment method depends, when you select a method of test accuracy, on: whether there is a preexisting threshold, in other words, is there something out there that says everybody above this should be positive, everybody below should be negative, does that already exist, or do you have to construct it? The number of tests to be assessed. And whether the assessment is performed on one population or more than one population.
And just very briefly, this is really hard to read. I can't get the lines on tables to work out for me, so this is lineless. But it turns out that sensitivity and specificity pairs really have one threshold, they do one test, and it's one population. Okay? Positive and negative predictive value: there is one threshold, one test, and two or more populations, because really, the positive and negative predictive values we're talking about are different prevalences, therefore different populations. And the area under the receiver operating characteristic includes all thresholds, two or more tests are assessed, and one population. So in other words, we use the ROC as a best unbiased measure of test accuracy.
In terms of the measures of accuracy discussed above, without changing the test itself, there are only two ways to change the accuracy of the test. One way, of course, is to change the threshold of the predictions, and then your sensitivity and specificity would change. And the other way is to change the prevalence of the disease in the population, because then your positive and negative predictive values will change. Okay?
Prevalence's effect on accuracy. The optimal prevalence for assessing the accuracy of a test is to use a population composed of 50 percent diseased, 50 percent unaffected. In this situation, the prevalence itself provides no advantage to the test. As the prevalence departs from 50-50, the impact of predicting the prevalence becomes more prominent. In other words, if the test acted as a naive Bayesian classifier, then for each patient it would always predict the most frequent outcome; in other words, it would predict the prevalence. So for example, if there was a 90 percent prevalence of disease in a population, then the naive Bayesian classifier would say disease every single time for every single patient, and you would be right 90 percent of the time. That's pretty good. Okay? That's a pretty accurate approach. As the proportion of patients with or without the event moves, either toward a hundred percent or zero, the naive Bayesian approach becomes more effective, more efficient in its predictions. So it's only at 50-50, for a binary outcome, that you neutralize the naive Bayesian classifier approach. In other words, if the true prevalence of the disease in a population is close to a hundred percent, it's almost impossible for a test to add predictive information. Okay? That's really an important idea. So, as you get toward high prevalences, almost no test will be helpful anymore. Okay?
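The 90 percent example can be reproduced in a few lines (a simulation sketch with an assumed prevalence, not data from any study):

```python
import random

random.seed(0)
prevalence = 0.90

# Simulated population: 1 = diseased, 0 = unaffected.
patients = [1 if random.random() < prevalence else 0 for _ in range(100_000)]

# The "naive Bayesian classifier" described in the talk: ignore every
# test result and always predict the most frequent outcome (disease).
predictions = [1] * len(patients)
accuracy = sum(p == t for p, t in zip(predictions, patients)) / len(patients)
print(round(accuracy, 2))  # → 0.9
```

The do-nothing classifier is right about 90 percent of the time, which is the benchmark any real test has to beat once the prevalence drifts away from 50-50.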
Changes in the prevalence of the disease in a population are reflected by corresponding changes in the test's positive and negative predictive values. If one were allowed to report the positive predictive value, or the negative predictive value, of a test just by itself, then one might be tempted to create or select a high prevalence population for assessment of the test, because the test would appear to possess a high predictive accuracy, okay? Until of course it was compared to the naive Bayesian classifier, at which point that advantage would cease. Thus, both the positive and negative predictive values of the test must be assessed. Then, if the prevalence is not 50 percent, the test must be compared to the naive Bayesian classifier. Further, both the sensitivity and specificity of the test must be assessed in terms of the cutoff that was selected, and the prevalence, because it turns out that although it's commonly thought that prevalence doesn't affect sensitivity and specificity, it certainly does, and there are a number of papers that demonstrate that.
So, a better way to assess the accuracy of the test is to use the ROC. This measure of accuracy is impervious to changes in prevalence and reflects the characteristics of the test across all sensitivity and specificity pairs.
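The prevalence-invariance of the ROC area can be illustrated directly, since the AUC depends only on how diseased and unaffected scores rank against each other (the score lists below are invented for illustration):

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve: the probability that a randomly chosen
    diseased case scores higher than a randomly chosen unaffected one
    (ties count one half)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

pos = [0.9, 0.8, 0.7, 0.6]    # test scores among truly diseased cases
neg = [0.5, 0.4, 0.3, 0.65]   # test scores among truly unaffected cases

# Same test, two prevalences: duplicating the positives shifts the
# prevalence from 50% to 80%, but the AUC does not move.
print(auc(pos, neg), auc(pos * 4, neg))  # → 0.9375 0.9375
```

Positive and negative predictive values would change sharply between those two populations; the AUC stays fixed, which is why it is the prevalence-free summary of a test.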
Well, okay. So now, I was asked to take a peek at Kern and Weisenthal's paper as well, and it turns out that they really are very sophisticated in their use of data and results. It's probably one of the most sophisticated papers I've ever read, and I have read quite a few. I'm going to talk about those areas of the paper.
Overview of the study. Kern and Weisenthal used two in vitro tests, which have been mentioned, as surrogate outcomes for response to chemotherapy in patients with different types of cancer. If a patient's tumor demonstrated drug resistance in a test, i.e., after the patient's tumor cells were exposed to the drugs for a certain period of time the cells did not achieve a threshold inhibition, the test was interpreted as predicting that the patient would not clinically benefit from receiving the drug.
So, we go back to our levels of evidence, and we can ask about this study overall, where would it lie in our levels of evidence? Well, the colony formation test is really Level III; it's weak evidence. The thymidine incorporation is really Level IV, insufficient evidence.
But, not letting that bother us too much, let's talk about the study itself. It's a retrospective chart review, subject to several biases, including therapy selection bias, who received the therapy, and study selection bias, which patients were included in the study. And the study was not validated on an independent population; it was done on the same population. It was done from 1980 to 1987 in the United States.
The study characteristics. The initial population was 5,059 patients. From that, they winnowed it down to the 450 patients that they actually studied, about 9 percent of the initial population. They looked at eight different types of cancer. They had 332 colony formation patients, 116 thymidine incorporation patients. And the non-respondent prevalence was 71 percent of the population.
One thing that the study wasn't very clear about: it said that virtually all patients were treated with standard chemotherapy, but then later on it said that most of the patients whose specimens were analyzed did not receive chemotherapy because they underwent curative surgical procedures. And I didn't understand that distinction.
We'll assume that all 450 patients in the study received chemotherapy. The percentage of patients who receive chemotherapy today may actually be much higher. The criteria used to decide which patients received chemotherapy are not revealed. This is really not a very acceptable approach to a study. If in fact you're going to predict who's going to respond to chemotherapy, I think you really have to say how chemotherapy was selected, what the selection criteria were.
Also not provided were the patient characteristics of the study population, and this is really critical information. For example, if the population was composed of patients who had already received primary chemotherapy, had incurable disease, and were undergoing salvage treatment, then this study would not be applicable today, and in addition, the results would be biased. So we really need to know what the chemotherapy selection criteria were, and what the patient population characteristics were, neither of which is provided to us. There is no basis from which to understand the results that we are seeing.
Now, the function of the test is to predict clinical non-response to chemotherapy using suprapharmacologic drug doses. Now, we're interested in the non-response rate per drug, per cancer type, per test type. That's what we're interested in. So there were eight drugs. Now, I'm not going to talk about combination therapy, because that's a whole other subject. There are eight drugs and eight cancer types; that means there were 64 bins, okay? So that means per cancer, per treatment, there were 64 of those combinations for each of the two tests, for a total of 128 accuracy assessments. And excuse me, the lines aren't there, but you see 64 bins. And for each bin, you would want to know prospectively, hopefully randomized, you would want to know, for disease one, treatment one, what does the test say, okay, about this population? In that one cell. And then you would want to follow that population over time and see what actually happened to those people.
So for breast cancer and a particular chemotherapeutic agent, you would like to see, did the test predict successfully for that chemotherapeutic agent, for breast cancer? And you'd want to do that for each of the 64 cells. And in fact, you must do it for each of the 64 cells.
If there were the same number of patients per cancer type, then the 118 patients tested for thymidine incorporation would come to 1.8 patients per bin for this study. And for the 332 patients tested for colony formation, there would be 5.2 patients per bin. These frequencies would be too low to be meaningful.
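The bin arithmetic in this passage checks out (using the 118 and 332 patient counts as stated in the talk):

```python
# Back-of-the-envelope bin count: 8 drugs x 8 cancer types, evaluated
# separately for each of the two assay end points.
drugs, cancers, tests = 8, 8, 2
bins = drugs * cancers        # 64 drug-by-cancer cells
assessments = bins * tests    # 128 accuracy assessments in total
print(bins, assessments)      # → 64 128

# Patients per bin if they were spread evenly across the 64 cells.
print(round(118 / bins, 1), round(332 / bins, 1))  # → 1.8 5.2
```

One or two patients per cell is nowhere near enough to estimate a sensitivity/specificity pair, which is the force of the "too low to be meaningful" point.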
Now, of the eight drugs tested and reported, are those the only drugs in use today, and are they not used in combination? The efficacy of these tests must be demonstrated with each chemotherapeutic agent in use today, and for each combination of agents, for each type of cancer.
Now, just a couple of final points. It's unclear why this study provided two sets of thresholds instead of one. Further, although two thresholds were tested for significance, three were presented in the text and shown in the tables and figures. The first threshold is 45 to 75, and the second one was 15 to 40. In this study, the thresholds were assessed on the same population that was used to determine the optimal thresholds. This elementary mistake, reporting the results from the population used to create the threshold, rather than the results from an independent population, always results in the overestimation of test accuracy.
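The resubstitution bias described here, tuning a cutoff on the same patients used to report accuracy, can be demonstrated with pure noise (a hypothetical simulation, not the paper's data): even when labels and assay scores are statistically independent, the best in-sample threshold still reports accuracy well above the true 50 percent.

```python
import random

random.seed(1)

def resubstitution_accuracy(n=30):
    """Labels and scores are independent noise, so no threshold has any
    real predictive value; yet choosing the cutoff that maximizes
    accuracy on this very sample still reports better-than-chance
    accuracy."""
    labels = [random.random() < 0.5 for _ in range(n)]
    scores = [random.random() for _ in range(n)]
    cuts = sorted(set(scores)) + [1.1]  # 1.1 = the "call everyone negative" rule
    return max(
        sum((s >= t) == y for s, y in zip(scores, labels)) / n
        for t in cuts
    )

mean_acc = sum(resubstitution_accuracy() for _ in range(500)) / 500
print(round(mean_acc, 2))  # noticeably above the true accuracy of 0.50
```

On an independent population the same tuned thresholds would fall back to roughly 50 percent, which is exactly the overestimation the talk warns about.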
The outcome was standard response criteria. We are never given a definition of what standard response criteria are. We don't know who got the chemo and why. We don't know the study population. We don't even know the outcome. We are never given definitions of any of those three. It's absolutely critical that the specific response criteria employed by the investigators be revealed, if that is their outcome.
Now of course, Rich Simon, whom many of you know at the NCI, and others, have pointed out that response is an unreliable outcome and should be avoided if at all possible. So, okay. So rather than the 64 sets of results that we were looking for, two sets of results were presented, one for each of the two thresholds. Each of the results is across all eight cancers, all eight therapies, and the tests, and the results are there: for the first threshold, 60-something percent sensitivity, 87 specificity; then 43 and 99. Clearly, the sensitivity goes down as the specificity goes up. Neither sensitivity and specificity pair is very high. Combining all results into one conglomeration provides no information regarding the utility of the test for each drug in terms of each cancer type. The study should have reported the area under the ROC curve, for both tests, for the 64 sets of results. Thank you.
DR. FERGUSON: Thank you very much. We're actually at our time for a break. Perhaps -- it is almost 2:30. If we take a 15-minute break, I think, yes, would you please come back up, because there may be a couple of questions for you.
(Recess from 2:25 p.m. to 2:45 p.m.)
DR. FERGUSON: I wonder if there are any in the audience, or panel for that matter, who would like to ask Dr. Burke some questions related to his presentation? And also, Dr. Burke has promised to give us the last few slides. Are there questions for Dr. Burke from members of the audience or from the panel?
Dr. Weisenthal, did you have a question that you wanted to ask, or a comment?
DR. WEISENTHAL: I want to thank Dr. Burke. He started off by paying me a compliment, and he said of Dr. Kern's and my paper that this is one of the most sophisticated papers that he's ever read. I've also been talking to critics of these technologies for 20 years, and that's the most sophisticated criticism that I've ever had, so I want to congratulate you on that.
There are several points that were raised in your talk which should be addressed.
Just to begin with, the study by Kern and Weisenthal that you spent the bulk of your time reviewing: you brought up several methodologic criticisms and raised questions about patient selection and so forth. I want to remind everyone here that that was published in the Journal of the National Cancer Institute. I assure you it underwent rigorous peer review. When we submitted our first draft of the manuscript, the reviewers there had certain problems with it and they had certain things they wanted clarification of.
DR. BURKE: But that's an appeal to authority.
DR. WEISENTHAL: No, no, no. Dr. Burke, had you been one of the reviewers, no doubt you would have raised those issues at the time and we would have responded to them. And I'd like to ask Dr. Kern now if he can respond. You know, you can't blame us because you were not the reviewer of our paper. Had you been there, helping us to get the essential information out there, I'm sure it would have been a better paper. But we'd like to address those issues that you raised at this time, if that's okay.
DR. BURKE: Absolutely.
DR. KERN: One of the points was the selection bias: how could you end up with 450 correlations out of 5,000 patients in the study? Well, the 5,000 patients was an overview of all the tests that we had done in the laboratory. It wasn't meant to imply that the clinical study was based on 5,000 patients. And in fact, at the Department of Surgery, UCLA, where I was, most of the patients were treated with surgery or radiation, not with chemotherapy.
Secondly, many of the patients that received chemotherapy received adjuvant chemotherapy. So the inclusion criteria of the study, to get to 400 patients, included: first, patients had to have advanced disease; second, they all had to have objectively measurable disease, either by CT scan, x-rays or so on.
22 Now, as far as another comment that you
23 made about one study, but it's not been
24 independently validated, I think I may ask Dr.
25 Bosanquet to address that issue, because he
1 published an article in Lancet a couple of years
2 after our paper.
3 DR. BURKE: Did you want to address any
4 of the other issues that I brought up?
5 (Inaudible response from audience.)
6 DR. BURKE: I mean, this is not an
7 opportunity for us to get into whether the study
8 has been validated or not at this time. That was
9 just an issue that I raised, and perhaps at
10 another forum that can be addressed further. I
11 think we have time limitations.
12 DR. KERN: Well, I will try to answer.
13 DR. BURKE: So keep going. There were
14 a lot of issues.
15 DR. KERN: Bring up a couple of the
16 issues, remind me of them. Let me see what you
17 consider a serious objection.
18 DR. BURKE: Well, the selection, the
19 patient characteristics, the criteria for who got
20 what treatment.
21 DR. KERN: Let's go one at a time. Who
22 got what treatment was determined independently,
23 not by the assay, but by the disease type. The
24 patients went on standard protocols. Most of the
25 patients who ended up being at UCLA, an academic
1 center, were all on some sort of clinical trial,
2 randomized trial protocol.
3 DR. BURKE: What were the standard
4 protocols? What was the response criteria that
5 you used?
6 DR. KERN: The response criteria were
7 the ECOG criteria of partial response and
8 complete response.
9 DR. BURKE: And what percentage was
10 each in terms of your study?
11 DR. KERN: I'm sorry, I don't --
13 DR. BURKE: In other words, in terms of
14 response, global response measured, and what
15 percentage of these patients were partial
16 responses, what were complete, and then at that
17 time, how were those defined in your study?
19 DR. KERN: Okay. The responses were,
20 again, just by objective measurements. It was
21 retrospective, but scans, x-rays. And the
22 complete response, obviously, complete
23 disappearance of the disease. Partial response
24 was by the criteria of two dimensions and the
25 shrinkage of at least half in two dimensions.
1 Standard criteria.
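The partial-response rule Dr. Kern states can be sketched in code. The product-of-diameters reading (the WHO-style bidimensional criterion) and the example measurements below are assumptions for illustration, not values from the study under discussion.

```python
# Sketch of a bidimensional partial-response criterion: the product of two
# perpendicular lesion diameters must shrink by at least half. This reading
# and the example measurements are illustrative assumptions.

def is_partial_response(before, after):
    """before/after: (d1, d2) perpendicular diameters of a lesion, in cm."""
    return after[0] * after[1] <= 0.5 * (before[0] * before[1])

print(is_partial_response((4.0, 3.0), (2.5, 2.0)))  # product 12.0 -> 5.0
```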
2 DR. BURKE: But this was a
3 retrospective study where you went back to the
4 charts. We all know about the paucity of
5 information and the error of information, and in
6 follow-up information not being in the charts.
7 How did you manage those issues in your
8 retrospective study?
9 DR. KERN: Well, the follow-up --
10 obviously, there are problems, and I'm not trying
11 to say there's not biases in it. We all know the
12 disadvantages of retrospective chart reviews.
13 The only thing I can tell you is what actually
14 was done, two oncologists reviewed the charts and
15 made their best decisions of what the responses
16 were, based on measurable criteria.
17 DR. BURKE: What did they do when they
18 disagreed? What did they do when information
19 wasn't there? What did they do to make sure it was
20 accurate information? Do you want to continue
21 with this?
22 DR. KERN: No, I cannot say that I can
23 answer every question. I mean, I'm not an expert
24 in your field.
25 DR. FERGUSON: One brief, and then
1 we'll let Dr. Bosanquet speak.
2 DR. WEISENTHAL: This is really
3 important, okay? You know, you talked about an
4 eight-by-eight table, and we only have one
5 point. You know, a study like this is never
6 going to be done again in the history of the
7 world. Never again are you going to have 330
8 patients treated with single agents. The
9 important thing about it was that this was an
10 honest blinded study in the following fashion,
11 and that is that the clinical results were
12 determined independent of knowledge of assay
13 results. The clinical results were reported to
14 the Department of Biomathematics at UCLA; they
15 were like the stakeholder in this, they had the
16 clinical assessments. Likewise, they received
17 independently from the laboratory the laboratory
18 assessments, and then the correlations were made
19 as stated.
20 DR. BURKE: Let's just deal with that
21 issue for a moment, because Dr. Kern sat down and
22 you stood up. So we've got the 64-bin table, right?
24 DR. WEISENTHAL: Yes.
25 DR. BURKE: And the issue is, how do we
1 know the utility of this test for a chemotherapy
2 in a disease?
3 DR. WEISENTHAL: Okay. You're making
4 the same criticism as Maury Markman has made.
5 What Maury Markman says is as follows, and he
6 says that he notes that there have been no
7 prospective randomized trials.
8 DR. BURKE: That's not my question.
9 DR. WEISENTHAL: Wait a second. But
10 it's the same thing. He says that even if some
11 day there were to be a prospective randomized
12 study, that that would only apply to that one
13 particular situation, and it would not tell you
14 anything about all the other situations.
15 You know, the sort of information that
16 you're asking for in the real world will not be
17 available for 20 to 50 years, if ever.
18 DR. BURKE: No, no. I understand the
19 mitigating circumstances. But the question is,
20 if you want this test to predict a particular
21 chemotherapeutic regimen in a particular disease,
22 then I want that information, and I don't have it
23 in your study.
24 DR. FERGUSON: Okay. I am going to ask
25 for Dr. Bosanquet to give his response, and then
1 we are going to go ahead. We will try to have a
2 little more time at the end.
3 DR. WEISENTHAL: There's an extremely
4 important point. Basically he started out his
5 talk denigrating -- in other words, I made the
6 point that we have 35 studies consistently
7 showing the same thing, and he denigrated that,
8 and he said, oh, that's just due to consistent
9 bias. And I would like to prove to you that that
10 is not true.
11 DR. BURKE: Excuse me. I didn't. I
12 posed an alternative hypothesis. I said there
13 are two hypotheses for the 35 consistent studies.
14 Assuming that they are consistent, which we
15 have no evidence of, but assuming that they are
16 consistent, it could be due to two things. It
17 could be due to the fact that there is a
18 phenomenon there, or it could be due to
19 consistent study bias. And until you eliminate
20 the alternative hypothesis, you haven't done science.
22 DR. WEISENTHAL: I would like to then
23 eliminate the alternative hypothesis and prove to
24 Dr. Burke that we have indeed done science in
25 this setting.
1 DR. FERGUSON: Can you do that in the
2 final hour?
3 DR. WEISENTHAL: Okay.
4 DR. BOSANQUET: It was stated that
5 there was no independent validation of this. We
6 actually took the data that we published in the
7 Lancet the following year. This paper that we're
8 discussing is 1990; we published this work in
9 1991 in the Lancet, using CLL patients. And we
11 also looked at extreme drug resistance in these patients.
12 We got this. We found 22 of 119
13 patients had extreme drug resistance in vitro,
14 and none of these patients responded. So here is
15 one of the things that we would speak to, which
16 was independent validation in a completely
17 different set of circumstances in a different
18 laboratory, and we find exactly the same thing,
19 extreme drug resistance, no response.
20 DR. FERGUSON: Using the same cutoff
21 points that were determined by Kern Weisenthal, I
23 DR. BURKE: Just to respond briefly to
24 that, two points. One, that is not a replication
25 of the 1990 paper.
1 And number two, I do suspect that
2 that's exactly correct, that it is disease and
3 treatment specific. And that's exactly my
4 point. That's exactly my point. You have to
5 talk a specific disease, a specific treatment,
6 how does the test do? Not a conglomeration of
7 diseases and treatments together. That's exactly
8 my point. Thank you.
9 DR. FERGUSON: Thank you. Dr. Burken.
10 DR. BURKEN: Hi, everybody. Can
11 everybody hear me okay? I am Dr. Mitch Burken, a
12 medical officer with the coverage and analysis
13 group at HCFA. What I'd like to do is try to tie
14 together some of the presentations from earlier
15 in the day. There will be a lot of material in
16 here that you've seen before, but what I want to
17 do is try to wrap it up, and wrap it up in a way
18 that's consistent with Dr. Bagley's opening
19 remarks around 8:00 this morning, looking at the
20 broad sweep of the evidence, not spending as much
21 time on specific papers as much as trying to see
22 the bigger picture, cutting across many assay formats.
24 Well, as I said, for the first several
25 minutes I want to be as conceptual as possible,
1 and then we'll get more into the bulk of the
2 evidence itself.
3 But let's think about why we would
4 order any type of lab test, okay? A lab test has
5 its maximum clinical utility when the disease
6 probability is most uncertain. In other words,
7 we heard a little bit about the 50-50 point, and
8 the naive Bayes condition, and so forth, but let
9 me just try to restate that in a slightly
10 different way. If we have any type of lab test
11 we're looking at, and exploring questions of
12 clinical utility, okay? What's the probability
13 that the patient has a disease? If a patient is
14 very unlikely to have the disease, then what kind
15 of information do you have when you get a lab
16 test result? It's certainly not very very high.
17 And in the reciprocal situation, where we
18 have a very, very high probability or prevalence
19 of disease, the lab test doesn't really
20 add a whole lot, because we're almost positive
21 the patient has the disease. It's when we're
22 unsure of ourselves, and when we are at that
23 50-50 point, that's when a lab result can really
24 begin to add value.
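The 50-50 point Dr. Burken describes can be sketched with Bayes' rule. The sensitivity and specificity figures below are hypothetical, chosen only to show that the post-test shift in probability is largest when the pretest probability is most uncertain.

```python
# Hedged sketch (not from the transcript): Bayes' rule for the post-test
# probability of disease after a positive result. The 0.80 sensitivity and
# specificity are invented numbers for illustration.

def post_test_probability(pretest, sensitivity, specificity):
    """P(disease | positive test), by Bayes' rule."""
    true_pos = sensitivity * pretest
    false_pos = (1.0 - specificity) * (1.0 - pretest)
    return true_pos / (true_pos + false_pos)

for pretest in (0.05, 0.50, 0.95):
    post = post_test_probability(pretest, sensitivity=0.80, specificity=0.80)
    print(f"pretest {pretest:.2f} -> post-test {post:.2f} (shift {post - pretest:+.2f})")
```

The shift is largest at the 50-50 point, which is the sense in which a lab result "begins to add value" there.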
25 Well, where are we in the Medicare
1 program? The panel is charged with trying to
2 demarcate what's reasonable and necessary with
3 respect to human tumor assay systems, okay? And
4 we need to find a spot in this, or we need to
5 kind of bracket an area of this graph where lab
6 testing -- and again, we'll talk about the HTAS
7 in a second. But where is lab testing most
8 reasonable and necessary? Where does it add value?
10 Let's talk now about applying
11 this more generic situation to human tumor assay
12 systems. Well, let's talk about the
13 chemosensitivity scenario. We talked all day
14 about how this testing can assist clinicians in
15 selecting effective single agents. Okay?
16 Conversely, the chemoresistance scenario is where
17 this assay, or where these assay systems can
18 avoid ineffective agents. And what's our
19 reference here? The reference is data from
20 published clinical trials; maybe they're in peer
21 reviewed journals, maybe unfortunately they're
22 just in abstracts that are available at ASCO
23 meetings. But again, there is information from
24 clinical trials that does provide a backdrop
25 against which one can look at this lab testing
1 and the added value thereof.
2 So let's go back to our graph again.
3 In vitro testing has the greatest clinical
4 utility when the presumed sensitivity or
5 resistance, because remember, they're really
6 reciprocal functions of each other, is most
7 uncertain. And going to our X axis here, what's
8 the real question? The question is, is tumor X
9 sensitive or resistant to drug Y in patient Z?
10 Therefore, we need to be specific as to what
11 questions we're posing.
12 Well, let's talk some more about
13 clinical utility, because as I said, what we want
14 to do is look at the broad sweep of the
15 evidence. We've talked about different outcome
16 measures today; we've talked about clinical
17 response; survival; we've talked even a little
18 bit about quality of life, although in the packet
19 of materials there is really not a lot of quality
20 of life literature to discuss, so we won't really
21 get into that.
22 And in looking at clinical responses
23 and outcome, we need to identify robust
24 two-by-two data using a valid gold standard, and
25 from there we can look at different performance
1 measures. In this case, I think it's valuable to
2 look at positive predictive value as a marker for
3 chemosensitivity, and negative predictive value
4 as a marker for chemoresistance. One could also
5 talk about the sensitivity or specificity, but
6 let's try to keep it just a little bit simpler,
7 let's focus in on some concepts, and not worry so
8 much about the math. There are others in the
9 room who may be more expert, but let's try to
10 keep it simple, and not get too wrapped up in the
11 numbers, but let's try to get wrapped up in
12 the themes and the concepts.
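The two performance measures named above can be read straight off a two-by-two table. The counts in this sketch are invented for illustration, not taken from any study in the record.

```python
# Hypothetical two-by-two table: rows are in vitro sensitive/resistant,
# columns are clinical response/no response. All counts are invented.

def predictive_values(tp, fp, fn, tn):
    """PPV = responders among assay-sensitive (chemosensitivity marker);
    NPV = non-responders among assay-resistant (chemoresistance marker)."""
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

ppv, npv = predictive_values(tp=30, fp=10, fn=5, tn=55)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")
```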
13 As we discussed earlier this morning,
14 as Dr. Bagley emphasized, we have to ensure that
15 the biases, such as insufficient sample sizes,
16 don't substantially influence our results.
17 Going back to our graph now, in 2-D
18 rather than 3-D, let me emphasize a point that
19 I've said already, but let me reemphasize it
20 again. That if you are at the extremes of this
21 utility function, okay, where the lab test,
22 whether it's human tumor assay system, or a serum
23 sodium, or whatever it is, or a chest x-ray, any
24 type of diagnostic test, if you're at the extreme
25 regions of this utility function, it doesn't
1 really matter what your predictive values are.
2 If the predictive value is high, it can be offset
3 by the fact that you are in an extreme region of
4 the utility function where those numbers don't
5 really mean as much. Okay? And we'll talk more
6 about that. Okay?
7 Well, what kinds of measures do we need
8 to evaluate test accuracy? The ones that I
9 talked about below, predictive values, but there
10 are also sensitivity, specificity, area under the
11 ROC curve, but let's talk about something else.
12 What about some of the physician concerns. In
13 the lab, what kinds of things can a physician
14 tell his or her patient when a particular tumor
15 can or cannot be assayed by the lab?
16 On the right side of the slide are what
17 I would call the quality control measures, and
18 we're not really going to spend time on those in
19 this particular, at least my particular
20 presentation, but you've heard from FDA earlier.
21 So let's just kind of stay on the left-hand side
22 of the slide for now.
23 Well, just to get back to a couple of
24 those issues that really cut to the heart of what
25 physician concerns might be, you know, is there
1 sufficient assessability or evaluability of the
2 tumor cells from the submitted specimens? And we
3 found out that some of the earlier clonogenic
4 assays had very very -- had relatively low
5 assessability or evaluability rates. But let me
6 pose another question.
7 Even if a particular assay format is
8 evaluable 90 percent of the time, it still might
9 mean that 10 percent of the time the physician
10 speaks with his or her patient, and they really
11 just can't get an adequate result. So I think
12 that's an issue. You know, even if it's 90 or 95
13 percent, there is still some percentage of the
14 time when you don't have a result and you come
15 back. Okay?
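The arithmetic behind that point is simple; the cohort size and the rates below are hypothetical.

```python
# Even a high evaluability rate leaves some patients with no usable assay
# result. The 100-patient cohort and the rates are hypothetical.

def patients_without_result(n_patients, evaluability_rate):
    return round(n_patients * (1.0 - evaluability_rate))

for rate in (0.90, 0.95):
    print(f"{rate:.0%} evaluable -> about {patients_without_result(100, rate)} of 100 patients with no result")
```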
16 There are other issues that come into
17 play. What's the effect of tumor heterogeneity?
18 We talked a little bit this morning about tumor
19 heterogeneity, but there's another type of tumor
20 heterogeneity as well, and that's the type that
21 can occur within the same patient, so that a
22 primary tumor and its metastatic lesions have
23 different in vitro patterns. And again, that is
24 a consideration to keep in mind when we're
25 thinking about this type of testing.
1 So as I mentioned, in vitro results for
2 solid tumors from one site may not always provide
3 the same result as other sites. However, there
4 is a paper back in 1986, we'll get to it a little
5 bit later, I'll touch on it again, but it shows
6 that this problem may not be quite as pronounced
7 in clonal lesions such as leukemias.
8 Well, there is a whole host of in vitro
9 assay formats and we can, as I say, just kind of
10 go through those. But I think it's important to
11 mention at the end here that we will not in this
12 presentation be going through the clonogenic
13 literature. When we reviewed this material at
14 HCFA, we didn't feel there would be a lot of
15 value in discussing the older technologies that
16 had the lower evaluability rates, and just felt
17 it would be better to present it to the panel
18 this way.
19 Well, what kinds of criteria do we use
20 to evaluate the literature? Again, our goal here
21 is to be broad based. We looked at peer-reviewed
22 journals in English. There were some manuscripts
23 pending publication that were necessary for panel
24 discussion. There were a couple of the assay
25 formats that were relatively recently developed
1 that we felt it would not be fair to the
2 requesters if we excluded some of the
3 manuscripts. For example, the Bartels
4 chemoresponse assay was FDA approved back in 1996,
5 and the package insert and the summary of the safety
6 evaluation data were included as a way of
7 evaluating that. And we did not look at abstracts.
9 Based on that, what types of additional
10 search methods did we use? Well, again, we looked at
11 articles submitted to HCFA prior to November 1st,
13 DR. FERGUSON: Mitch, can you speak
14 into the microphone?
15 DR. BURKEN: Right. When we started
16 reviewing this sometime, sometime before the
17 panel itself, we found that the Fruehauf and
18 Bosanquet review article from the 1993 PPO
19 updates, crystallized many of the issues. And
20 what we did, based on it, there were some summary
21 tables that were actually presented this morning,
22 where they looked at groups of studies. And I
23 again refer you to summary table seven and eight
24 from the 1993 PPO. And as a result, we really
25 focused our efforts, our literature efforts on
1 the EDR, as well as some of the other
2 thymidine assays, because there were some other
3 thymidine uridine incorporation assays pertinent
4 to bring to the panel. And then we also did a
5 lot of sampling of DiSC and MTT, using a MEDLINE
6 search, and we did not have any time limit on our search.
8 And when we went through and did our
9 literature search, then we had to figure out what
10 we would want to present to the panel. And since
11 clinical response was one of the outcomes we
12 looked at, as well as survival, we needed to have
13 confidence in the viability of our two-by-two
14 tables. So as a result, any study that lacked
15 the clinical criteria was excluded -- the
16 clinical criteria either had to be documented or
17 referenced, you know, for clinical response. We
18 only looked at adult patients.
19 And just for the record here, in the
20 rather extensive handout which has been provided
21 for this session, we do list the pediatric
22 studies that have not been summarized in this
23 presentation, but there is a notebook of all the
24 studies that are being presented in this
25 presentation, which is available. I know it's kind of
1 hard to read the whole notebook tonight, but as a
2 supplement to the materials you already received,
3 there are papers in here such that anybody that
4 has any questions about any of the bullet items
5 from this afternoon's presentation can go back to
6 this, as well as your other materials.
7 We included both prospective and
8 retrospective two-by-two data designs. The only
9 thing we did exclude for this, again, for this
10 panel presentation, were descriptive type studies
11 that didn't use any quantitative summaries.
12 There were some studies that went beyond
13 two-by-two tables, used regression analysis and
14 some other techniques, and those were included.
15 Now talking about all these studies,
16 you know, how can we present these studies to the
17 panel? Can we group them or pool them, or
18 do we need to go through them individually? It
19 was something that we really had to spend some
20 time thinking about. And we came to the very,
21 very strong conclusion that data from the
22 individual studies should not be pooled. The
23 reason is that the studies are so
24 heterogeneous, they use different cutoff points,
25 different tumor drug combinations, different
1 clinical response criteria, that we just felt
2 very uncomfortable about doing a meta-analysis
3 for the purposes of presenting data to the panel,
4 okay? So therefore, each study must be presented
5 on its own merit, and I think that's a
6 fundamental approach to presenting data this
7 afternoon, and probably lengthens the
8 presentation a little bit, but we feel it's worth it.
10 So now, let's just go through the
11 evidence. Let me just walk through the handout
12 with you. I don't -- there is a lot of bulk on
13 my slides, but again, it's in the handout and it
14 is really set up to be a reference guide to
15 trying to put it all together.
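The decision not to pool that Dr. Burken just described can be sketched numerically: two hypothetical studies with different cutoffs can each be internally consistent while a naively pooled number represents neither. All counts below are invented for illustration.

```python
# Why pooling heterogeneous two-by-two studies can mislead: each hypothetical
# study below has its own cutoff and case mix, and the naively pooled NPV
# matches neither study. All counts are invented.

studies = {
    "study A (cutoff 1)": {"tn": 18, "fn": 2},   # NPV 0.90
    "study B (cutoff 2)": {"tn": 40, "fn": 20},  # NPV about 0.67
}

for name, s in studies.items():
    print(name, "NPV =", s["tn"] / (s["tn"] + s["fn"]))

pooled_tn = sum(s["tn"] for s in studies.values())
pooled_fn = sum(s["fn"] for s in studies.values())
print("naively pooled NPV =", pooled_tn / (pooled_tn + pooled_fn))
```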
16 The assay formats I start with are not
17 based on cell death versus cell proliferation.
18 It's not done that way. I went from the assay
19 formats that we concentrated on, as in the EDR,
20 the DiSC and the MTT, where we really had the
21 most literature, and then towards the end I have
22 some of the other formats where there was a
23 little less literature that came up, based on the
24 criteria that were described in previous slides.
25 The Kern and Weisenthal article from
1 1990 is a complex article that was referred over
2 to Dr. Burke and Dr. Handelsman for separate
3 review. Again, a central piece of evidence, but
4 highly complex.
5 But let me go through some of the other
6 articles that pertain to EDR as well as some of
7 the other thymidine uridine incorporation assays.
9 Eltabbakh, '98, shows, you know, some
10 PPVs and NPVs. There were some confidence
11 intervals that are reported. As you can see, the
12 NPV in this study is actually fairly low. Let
13 me start, as I said, rather than to go through
14 all the bullets, let me try to highlight what are
15 some of the themes. As I said, there were a
16 hundred new patients with ovarian cancers. We
17 find out in this study, all the patients were
18 recruited prior to chemotherapy, which is
19 important when we think about selection bias.
20 And we found 75 evaluable patients, so we went
21 from about a hundred down to 75, which is really
22 pretty good.
23 Fernandez-Trigo is a study, again, that
24 also has some case loss of roughly 25
25 percent. But in this case, there was a very rare
1 site cancer that was selected, so I would just
2 keep that in mind.
3 Moving on to some of the other
4 thymidine uridine formats, you know, I talked
5 about the CRA, the Bartels CRA, and it turns out
6 that there was a study by Elledge back in '95
7 that enumerates the findings of a clinical trial.
8 And one little, kind of I suppose warning to the
9 panel when evaluating this paper, I mentioned
10 that NPV is a marker for chemoresistance and PPV
11 being a marker for chemosensitivity. Well, in
12 this particular article, not the article but the
13 package insert, and the summary safety and
14 evaluation data, it's flip-flopped, so you have
15 to be a little bit careful there. It turns out
16 you have to reverse that, so you have to really
17 be wide awake when you read these two-by-two
18 tables. And I mention that down here, that the
19 two-by-two table design differs from the other
20 studies presented, even differs from the Elledge
21 paper, which is -- the Elledge paper comes out
22 before the FDA submission data. And this was a
23 prospective blind enrollment of 60 relapsed
24 breast cancer patients. The interesting thing
25 about this particular assay format is that it was
1 very specific for breast cancer and 5-FU.
2 Well, what about some of these earlier
3 five-day thymidine uptake assays? Sondak in '84
4 had a series of 142 patients with successful
5 assays out of a pool of 219, with 33 clinical
6 correlations. Quite a bit of case loss here,
7 even though, again, you know -- well, the numbers
8 are small, but the NPV, again, is high. You
9 know. But one, again, has to be concerned about
10 possible selection bias.
11 Sondak in '85, 819 mixed solid tumors,
12 again, if you use different cutoffs, you're going
13 to have different PPVs and NPVs.
14 Sanfilippo, in '81, there were several
15 studies on three-hour incubation, rather than the
16 five days, you know, and there were -- as I said,
17 you can see the numbers here. I think the
18 interesting thing about this particular study in
19 '81 was the use of subsets for high
20 proliferative and low proliferative non-Hodgkin's
21 lymphoma cases.
22 And in '86, Sanfilippo, the same group,
23 went ahead and studied 169 patients with various
24 types of germ cell testicular tumors, but only 29
25 cases were available for clinical correlation.
1 Again, we didn't really know how many people were
2 previously treated or untreated, and that could
3 inject some bias.
4 More three-hour uptake assays, two
5 studies by Silvestrini in 1985 and Daidone in
6 1985 from the same institution as Silvestrini.
7 Different tumor types.
8 Well, let's kind of move along, and we
9 finished up with the thymidine/uridine
10 incorporation assays, and let's move on to the
11 DiSC assay. And as I say, there were several
12 papers that were reviewed in the Cortazar and
13 Johnson article, which is the review article in
14 1999, which did a MEDLINE search, and targeted 12
15 studies, and four of those 12 studies were DiSC
16 assay approaches in solid tumors, three of the
17 four studies being small cell lung carcinoma, the
18 other study being a non-small cell lung
19 carcinoma. And I think, you know, in each of the
20 studies, the test groups did at least as well as
21 the control groups. The survival data was not
22 particularly convincing. In the Gazdar study,
23 the survival rates were similar; in the Wilbur
24 study, the survival rate comparisons were not
25 reported. Again, that is survival rate of assay
1 directed versus, you know, empiric therapy
2 groups. In both the Shaw and the Cortazar
3 articles, their survival rates were really not --
4 there was really not enough of a difference to
5 really hold much discussion.
6 But I think where we find a lot more
7 evidence, again, based on our structured review,
8 is looking at the hematologic tumors. And
9 we start with Dr. Weisenthal's study in '86 where
10 there are 70 cases. What we did was we subtracted
11 out the 29 cases of ALL. Again, it's just a
12 judgment call one makes as to how you want to
13 treat the pediatric tumors. I can tell you that
14 the pediatric performance table was just about
15 the same as for adults, and there weren't
16 significant differences, so I think what one
17 could --
18 DR. FERGUSON: Mitch, closer to the
19 microphone please, or maybe you should hold it.
20 DR. BURKEN: Yeah. As I said, one can
21 scan through some of the pediatric studies
22 quickly, and I think get a flavor for that. But
23 moving on, looking at the adult data, we had PPVs
24 and NPVs that were over 80 percent.
25 Dr. Bosanquet in 1999 had a fairly
1 elegant study, and it was reviewed earlier
2 today. I will leave the details to the group.
3 Another study that was also discussed
4 earlier today was a study by Mason that did some
5 modeling. In this particular case, not only was
6 there some clinical response data and survival
7 data that was looked at, but there was some
8 modeling done using regressions, where if you --
9 I know that the print is a little small at the
10 bottom, but I just wanted to mention that if you
11 look at the life years gained per assay, the
12 modeling here said that if you had a simulated
13 50-year old with stage C chronic lymphocytic
14 leukemia, the life years gained would
15 be about six months, and about three weeks if it
16 was a simulated age 70 stage AB female. Again,
17 these are all simulations that are based on the
18 assumptions in the regression modeling. But it's
19 a little bit more of a sophisticated approach, as
20 I said.
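The shape of a "life years gained per assay" figure like Mason's can be sketched as an expected value. The decomposition and every number below are assumptions for illustration, not the actual regression model.

```python
# Rough sketch: expected life years gained per assay, decomposed as the
# probability the assay changes management times the survival difference
# when it does. Both inputs are invented; Mason's model is regression-based
# and more elaborate.

def life_years_gained(p_changes_treatment, survival_gain_years):
    return p_changes_treatment * survival_gain_years

print(life_years_gained(0.25, 2.0))  # 0.5 years, i.e. about six months
```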
21 And continuing on, there's been, as I
22 said, a fair amount of work in hematology.
23 Tidefelt in 1989, with more than 90 percent of
24 the patients not being previously treated. There
25 are complex predictive value analyses in this
1 paper with varying anthracycline concentrations
2 and different treatment regimens. But if you
3 flesh it all out, actually 40 out of 53 patients
4 were available for clinical correlations, and you
5 can see the PPVs and the NPVs.
6 To go on now, to continue more on the
7 hematologic DiSC studies, Bird in '86 was a small
8 study. That's the one I quoted earlier because
9 there was peripheral blood and bone marrow that
10 were used as two separate sites. And there
11 seemed to be reasonably good concordance between
12 in vitro testing, between peripheral blood and
13 bone marrow. But again, I caution, the sample
14 sizes are fairly small here. Bird in '88, again,
15 another small sample study.
16 I'm going back just some more. More
17 small sample studies. Dr. Bosanquet in 1983.
18 Dr. Beksac in '88. I think the interesting thing
19 about this study was it's kind of a mixed
20 retrospective prospective approach, so it was a
21 somewhat complicated study even though there were
22 only 16 patients.
23 Dr. Bosanquet's 1991 study that was
24 actually just up on the screen a few minutes ago,
25 showed 67 patients with CLL where there was a
1 survival benefit. But just before we leave DiSC
2 and the hematologic applications, you know, there
3 were some articles that didn't have documented
4 clinical criteria, and you see down, Dr.
5 Bosanquet's article. Again, we certainly looked
6 at the survival data, but we didn't feel that the
7 clinical criteria were adequately specified in
8 this particular paper even though, as I said, it
9 showed some survival data.
10 And Kirkpatrick from 1990 was a paper,
11 again, all in the backup book here that has some
12 pediatric data.
13 Moving on to MTT, several articles did
14 not have documented clinical criteria so that for
15 the purposes of this panel presentation, we
16 didn't feel it would be useful to present
17 clinical response data. And it's interesting
18 that three -- we talked about the 12 studies from
19 Cortazar and Johnson. Three of them were Yamaue,
20 1991, 1992 and 1996, but unfortunately, none of
21 those articles had, you know, adequately
22 described criteria, and we just didn't feel we
23 could construct good enough two-by-two tables.
24 And the survival rates in these studies do not
25 compare test versus control groups. And so, the
1 pediatric neoplasm studies are listed there.
2 Veerman, I think was also mentioned this
3 morning. But again, you know, it's our choice.
4 Well, let's move along then to some of
5 the solid tumor studies for MTT, and this goes
6 more or less in chronological order. We had Suto
7 in 1989, with GI solid tumors. Again, a very
8 small number of clinical correlations are available.
10 We had Tsai in 1990. This was from
11 cell lines from 25 patients with small cell lung
12 cancer. In this case we have regression modeling
13 as opposed to two-by-two data.
14 Furukawa has a larger sample, but
15 again, only 22 patients available for clinical
16 correlation. So the numbers may be high, you
17 know, an NPV of 100 percent and a PPV of 75
18 percent but again, with small sample sizes and
19 this degree of case loss, one really has to
20 wonder about the possible selection bias. And
21 then we see some survival benefits in this study.
23 Saikawa in 1994, 50 patients, 40 of
24 whom received post-surgical chemotherapy. This
25 was basically just divided up into two groups, an
1 adapted group versus a non-adapted group.
2 Again, we have some survival data here as well.
3 Sargent, '94, 206 confirmed or
4 suspected epithelial ovarian adenocarcinoma
5 patients. 37 were previously untreated. And
6 again, we have a -- we were able to have survival
7 data on 37 of those 206.
8 We have a more recent study by Taylor,
9 again, stage three, four, previously untreated
10 adenocarcinoma. 43 available for clinical
11 correlation, or roughly 50 percent out of the
12 starting 90 were finally available after you
13 consider tumor evaluability and clinical
14 correlation. And we have a couple of subgroups
15 here for all treatments and platinum only.
16 Xu, 1999, it's in your packet, your
17 original green book. 156 advanced breast cancer
18 patients. And they actually noted in the study
19 itself that the source of selection bias -- well,
20 they didn't say they had selection bias, but they
21 did say that they preferentially recruited worse
22 prognosis patients in the MTT directed versus the
23 control group, which was certainly a source of
24 concern for those reviewing it.
25 And just a -- hematologic MTT studies,
1 there's just a lot of those. If you go several
2 slides back, a lot of those studies like Veerman
3 and Hongo were excluded because they
4 were pediatric studies, but if you look at 23
5 patients with de novo AML and five in CML blast
6 crisis, 21 were available for clinical
7 correlations, with again, good looking predictive
8 values, but I think people should evaluate the
9 robustness of the numbers.
10 Then, just to kind of close out, again,
11 we wanted to be fair and not just present the
12 thymidine incorporation assays as well as DiSC
13 and MTT, so we did look at some studies from FCA
14 and some of the other assay formats. Leone had
15 78 cases in 1991. This is again, for those of us
16 that are trying to keep up with the different
17 abbreviations, this is the fluorescent cytoprint
18 assay; see the Leone study.
19 And then Meitner in '91 actually
20 extended the Leone data set and worked it up to a
21 total of 101 cases with similar NPVs and PPVs.
22 FMCA is a similar fluorescent method, a
23 little bit more recent. Some of the literature,
24 a Larsson study had 43 samples with 27 clinical
25 correlations. Again, the numbers look pretty
1 good. In this circumstance we did find
2 blinding. I'll tell you a little bit about
3 blinding towards the end, but not terribly often
4 did we see evidence of blinding in the studies.
5 Csoka is a more recent article, as I
6 mentioned. 125 patients with newly diagnosed or
7 relapsed ovarian cancers. 45 available for
8 clinical correlation. He did have a breakdown of
9 previously treated versus drug naive patients.
10 Blinding again was reported. And there was, in a
11 small group again, an NPV of 100 percent.
12 Again, moving along, Dr. Nagourney was
13 kind enough to submit a manuscript to HCFA on his
14 apoptotic assay. One thing I would mention is,
15 or question I would raise is, the manuscript was
16 just a summary manuscript, and from it we were
17 not able to determine how the EVA assay improved
18 treatment management beyond empiric treatment
19 regimens for the refractory patients. So that is
20 a question.
21 Then we also looked at several of the
22 HDRA papers that were submitted to us by Dr.
23 Hoffman and his company. Many of the articles
24 that were submitted to us are in a -- again, all
25 of it is available to the panel in a notebook
1 form, but what we did is we pulled out, and
2 they're also in here, the four papers, three articles and
3 a manuscript, that are clinical correlations.
4 Many of the other papers were experimental and
5 pharmacologic studies.
6 I might add that there was a lot more
7 material that was submitted to HCFA than my
8 presentation would suggest, many many more papers
9 that we looked at. But many of them were
10 experimental pharmacology studies, and we didn't
11 feel that this particular venue looking at
12 medical necessity would be quite the right place
13 to get into a lot of extensive experimental
14 pharmacology.
15 So looking at these four papers from
16 HDRA, again, small sample sizes, but again, you
17 can see the NPVs. Again, very few of the studies
18 had confidence intervals calculated, as you can
19 see throughout my presentation.
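Since the speaker notes that very few studies calculated confidence intervals, it is worth seeing how wide those intervals are at the sample sizes under discussion. A sketch using the Wilson score interval; the 12-of-16 split is a hypothetical proportion chosen to match a 75 percent predictive value at a small denominator, not a figure from any paper reviewed here:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# A PPV of 75 percent observed in, say, 12 of 16 assay-sensitive patients
lo, hi = wilson_interval(12, 16)
print(f"PPV 0.75 (n=16): 95% CI {lo:.2f} to {hi:.2f}")
```

At n = 16 the interval runs from roughly 0.50 to 0.90, which illustrates the speaker's caution: a headline predictive value of 75 or 100 percent from a couple of dozen patients is compatible with a very wide range of true performance.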
20 Furukawa in '95, this was presented
21 earlier in the day. Post-surgical stage three to
22 four patients. Mixture of gastric and colorectal
23 tumors, and similar findings that we've seen.
24 Kubota, 1995, stage three to four gastric
25 cancer with somewhat -- when I bulleted these for
1 you, what I have done is I've not gone into all
2 the different subgroupings, so therefore, some of
3 the sensitive groups range from a sample of 20
4 to 38, and the resistant groups from 89 to 99,
5 but I have not gone into great detail to specify
6 all those subgroupings. I am trying to give the
7 main message here.
8 And then Dr. Hoffman's article that's a
9 manuscript in press. Again, more gastric and
10 colorectal tumors.
11 Well, as we try to summarize all the
12 literature and take this kind of view from the
13 hillside here, the Cortazar and Johnson article
14 helps do that a little bit, in the sense that
15 they've selected out 12 prospective trials. I
16 mentioned that four of them were the DiSC studies
17 that I outlined, the three MTT studies that I
18 didn't present to the panel because of the lack
19 of documented clinical criteria, and five of
20 those 12 studies were the earlier clonogenic
21 assays.
22 Overall findings from the 12 study
23 review, showing that only a small percentage of
24 patients have actually been treated with an in
25 vitro selected regimen, and that's certainly
1 consistent with many of the studies that I have
2 presented along the way, demonstrating case
3 loss. And most of the patients have had advanced
4 stage solid tumors. The overall assessability
5 rate only being 72 percent, but I think it's
6 really quite fair to mention that five of those
7 12 studies were from the earlier clonogenic
8 methods, so that would have a negative impact on
9 the overall evaluability rate since again, we're
10 talking about five assay formats that were not as
11 technically advanced as DiSC and MTT.
12 And in these trials, the response rates
13 among the directed therapy patients were at least
14 as good as those achieved with empiric therapy,
15 and five of the 12 trials illustrated survival
16 data for the directed versus empiric therapy, but
17 it was difficult to determine overall trends in
18 these five studies, including three DiSC trials.
19 In only one of those trials was there
20 randomization, and in that particular trial all
21 the experimental arms consisted of small sample
22 sizes.
23 So where are we, at nearly the end of
24 the day here? I think we found in going through
25 this systematic review that there is not strong
1 convincing medical evidence to support the
2 overall clinical utility of human tumor assay
3 systems. The comprehensive literature review
4 demonstrates that there were many different tumor
5 drug combinations among different studies and
6 this made it difficult to really make conclusions
7 about particular tumor drug combinations because
8 of this variability. And that's really kind of
9 what I would call a structural feature of
10 reviewing so many articles. Many of them had
11 small sample sizes. We had frequent selection
12 bias, recruiting documented or possible
13 refractory patients.
14 Remember, let's go back to our utility
15 function where we are thinking about being in the
16 center of that function or at the extremes, and
17 if you are recruiting patients into the study who
18 are at the extremes of that utility function,
19 then there is a concern that regardless of
20 whether the negative predictive values are high or
21 not, you're not getting a lot of clinical
22 utility. And in the same vein, by recruiting
23 advanced stage patients, you may be getting
24 yourself into a situation where without lab
25 testing, you pretty much know that a patient
1 isn't going to respond anyway, therefore your
2 negative predictive value or your positive
3 predictive values are going to be adversely
4 affected by such selection bias.
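The point about recruiting advanced-stage patients can be put in numbers: by Bayes' rule, NPV rises mechanically as the response rate in the recruited population falls, regardless of how good the assay itself is. A sketch with a hypothetical assay of 85 percent sensitivity and specificity (illustrative values, not drawn from the reviewed studies):

```python
def predictive_values(prevalence, sens, spec):
    """PPV and NPV as a function of the response rate, via Bayes' rule."""
    ppv = prevalence * sens / (prevalence * sens + (1 - prevalence) * (1 - spec))
    npv = (1 - prevalence) * spec / ((1 - prevalence) * spec + prevalence * (1 - sens))
    return ppv, npv

# Same hypothetical assay, two recruitment pools:
# a balanced population versus a mostly refractory one
for prev in (0.50, 0.05):
    ppv, npv = predictive_values(prev, sens=0.85, spec=0.85)
    print(f"response rate {prev:.2f}: PPV {ppv:.2f}, NPV {npv:.2f}")
```

With a 5 percent response rate the NPV exceeds 0.99 while the PPV collapses below 0.25, even though the assay has not changed; this is exactly why a high NPV from a refractory cohort is weak evidence of clinical utility.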
5 And I've noted before that there was
6 only rare or occasional documented use of
7 blinding.
8 Well, that's the broad sweep, but when
9 we go down and we look at it in a little more
10 detail, we really should note that there were a
11 relatively higher number of clinical correlations
12 available for DiSC and MTT assay formats. As a
13 result, the human tumor assay systems may have a
14 greater potential clinical utility for
15 hematologic neoplasms such as CLL, where there
16 has been really a fair amount of work, than solid
17 tumors. And when considering this literature,
18 let's never forget, you know, the importance of
19 evaluability and heterogeneity in making
20 conclusions.
21 And again, we are still in the tough
22 spot of trying to apply single agent drug tumor
23 interactions to multiple agent regimens, and I
24 think a certain amount of inferences have to be
25 made from these, you know, in vitro studies.
1 Thank you.
2 DR. FERGUSON: Thank you. Dr.
3 Bosanquet?
4 DR. BOSANQUET: Thank you, Dr. Burken,
5 for summarizing that. I'm glad to see the up to
6 date work has been included in the production.
7 You spent some time on your clinical
8 utility curve at the beginning. I wonder if you
9 could explain to us all how that was
10 mathematically derived, because I would have
11 drawn a different curve.
12 DR. BURKEN: Well, one could, I suppose
13 one could argue that rather than being a
14 triangular distribution, it could be a normal
15 distribution and have a slightly different look
16 to it. But I think what we ought to do is agree
17 on the fact that a lab test is most valuable when
18 you're most unsure of whether the patient has a
19 disease or not.
20 DR. BOSANQUET: I quite agree with
21 you.
22 DR. BURKEN: I think that's a critical
23 point, and I think that ought to be established,
24 and that the value of any lab test is going to
25 drop off considerably if you're at the extremes
1 of prevalence.
2 DR. BOSANQUET: Well, that's the bit
3 that I would necessarily disagree with. You have
4 drawn a triangular curve here, if I can call it a
5 curve. We all agree, I think, with the 0 percent
6 on the left and the 0 percent on the right, or
7 the low added information at both left and right,
8 and the very high added information in the
9 middle. But just the shape of that curve, and
10 you have spent some time on it, and I just would
11 ask you again, how did you mathematically define
12 that? Because if you look at the Bayesian
13 curves, I think you would find a mathematical, if
14 you define that mathematically from the Bayesian
15 curves, I think you'd get rather a different
16 curve, and your conclusions from this bit of the
17 talk would then be different.
18 DR. BURKEN: Yeah. Let me just say
19 that I, you know, I'll admit up front that this
20 could have been a normal curve rather than a
21 triangular function. But the most -- rather than
22 getting bogged down in the mathematics, I think
23 it's important for the panelists to consider the
24 question of mapping out -- let me go to this next
25 graph. What we need to do is we need to kind of
1 map out an area where we feel laboratory testing
2 is reasonable and necessary. Now we're not --
3 I'm not standing up here and telling you that
4 that cutoff point -- it happens that I have the
5 yellow box here at maybe 20 percent or 80
6 percent. I'm not coming out and telling you that
7 there's any mathematical validity to making the
8 box 20 percent and 80 percent. What I'm trying
9 to do is illustrate a concept of how a lab test
10 becomes less useful as it drops off away from the
11 50-50 point.
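One standard way to make the shape-of-the-curve question concrete is to use the Shannon entropy of the pretest probability as the measure of uncertainty: it peaks at the 50-50 point and falls to zero at the extremes, but it falls off much more gently than a triangle. Whether utility should track entropy, a triangle, or the Bayesian curves Dr. Bosanquet prefers is exactly what is in dispute; this sketch only plots one candidate:

```python
import math

def binary_entropy(p):
    """Uncertainty (in bits) about a yes/no outcome with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in (0.05, 0.2, 0.5, 0.8, 0.95):
    print(f"pretest {p:.2f}: entropy {binary_entropy(p):.2f} bits")
```

Entropy is 1 bit at 50 percent and still about 0.29 bits at 5 percent, which illustrates why the shape matters: under this candidate curve, substantial uncertainty (and hence potential test value) remains well outside the 20-to-80 band.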
12 DR. BAGLEY: Would it be fair to say
13 that although the Bayesian curve which you
14 showed, which is mathematically derived, deals
15 with the probability of a correct diagnosis,
16 whereas what we're dealing with here is not the
17 probability but the clinical utility of
18 increasing that probability? I mean, as the
19 certainty of the disease goes higher, the
20 probability, you know, based on a combination of
21 tests, is also going to go up. But as that
22 probability becomes more certain, the clinical
23 utility or the incremental value of that
24 additional information becomes less. And I think
25 this is an expression of the value of the
1 added information.
2 DR. BOSANQUET: I quite agree with you,
3 but you see, many of these tests are used on
4 resistant patients, where the pretest probability
5 of response is very low. And what Dr. Burken is
6 implying by this curve is that if you have less
7 than 20 percent pretest probability of response,
8 then these tests aren't very useful. And I would
9 challenge him, and I think he admitted that this
10 is not mathematically defined.
11 If we could just have a look at the one
12 slide that I've got? We've seen this slide
13 before, and the important thing is, if you take a
14 pretest probability of response of, say 5
15 percent, Dr. Burken was suggesting that anything
16 below 20, the test was not going to add any
17 information. But if you look at this, the
18 information is added at very low levels, because
19 if you take a patient who has a pretest
20 probability of response of 5 percent, you can
21 split that into test sensitive patients who have
22 a probability of response of 20 percent, and
23 those who have a probability of response of 1
24 percent. And I think that's, I would disagree,
25 and I think that would be a useful addition of
1 information to those patients with very low
2 pretest probability of response, so anything from
3 5 percent on. And therefore, I would suggest
4 that the curves you were showing were somewhat
5 misleading. That's all I'd like to say.
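Dr. Bosanquet's numbers can be checked for internal consistency: if a 5 percent pretest probability of response splits into 20 percent (assay-sensitive) and 1 percent (assay-resistant) posttest probabilities, the mixture identity pins down the fraction of patients called sensitive and the implied operating characteristics of the assay. The 5/20/1 figures are from his slide; everything derived from them here is arithmetic, not data:

```python
pretest = 0.05          # overall probability of response
post_pos = 0.20         # posttest probability if assay calls "sensitive"
post_neg = 0.01         # posttest probability if assay calls "resistant"

# Mixture identity: pretest = f * post_pos + (1 - f) * post_neg
f_pos = (pretest - post_neg) / (post_pos - post_neg)

# Implied operating characteristics, by Bayes' rule
sensitivity = f_pos * post_pos / pretest
specificity = (1 - f_pos) * (1 - post_neg) / (1 - pretest)

print(f"fraction called sensitive: {f_pos:.3f}")
print(f"implied sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
```

The split is self-consistent (about 21 percent of patients called sensitive, with implied sensitivity and specificity both in the low 80s), so the disagreement is genuinely about how much that 20-versus-1 separation is worth clinically, not about the arithmetic.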
6 DR. BURKEN: Yeah. What I'm going to
7 do is I'm going to let Dr. Burke pitch in a
8 little bit with some of the mathematics. But
9 again, I want to emphasize that the schematic
10 that was put up did not, you know, was not --
11 that box did not mean to imply that 20 percent or
12 80 percent or 30 percent would be some type of
13 cutoff that this panel would be expected to
14 respond to. The diagram is simply there to
15 conceptually show, probably more in
16 a qualitative way than a quantitative way, that
17 there is simply less value from a lab test among,
18 in a situation where you are very sure of a
19 disease, or you think the probability of disease
20 is so low, that's also another scenario where the
21 test wouldn't be terribly useful, and that the
22 inference from that diagram -- so let me flip it
23 later on to this one.
24 And again, please don't read any
25 cutoffs in here that, where the red triangles
1 begin at 20 percent or 80 percent, please don't
2 read it that way. But the purpose of that
3 diagram is simply to show that at the extreme
4 regions of low probability or high probability,
5 if you have studies with selection bias, where
6 patients who were recruited into a study are in
7 the extreme regions of that utility function,
8 what it can do is detract from the power, or I
9 don't want to use that word power because that's
10 a statistical term and I'll get myself into
11 trouble with some of the statisticians. It can
12 detract from the ability to use positive and
13 negative predictive values as a marker of
14 clinical utility. You just have to be aware of
15 what kinds of patients you have in your study
16 before you can go to bat with high NPVs or PPVs
17 to make a case for a lab test, any lab test.
18 DR. FERGUSON: Dr. Burke.
19 DR. BURKE: Thank you. There's a
20 couple points that have to be made. One is, when
21 you -- I mentioned briefly the 50-50 situation,
22 which is the fair test for a test. But what
23 happens is if you look at the accuracy of a test,
24 it's very difficult to find therapy dependent
25 prognostic factors that are really accurate.
1 It's really hard to do. If you're looking at
2 estrogen receptor status, the area under the ROC
3 is about .62. Okay? So it turns out that these
4 factors are fairly weak. This test is a therapy
5 dependent prognostic factor. And the issue
6 becomes, if your test has an accuracy of .62, but
7 being a naive Bayesian gives you an accuracy of
8 90 percent correct, okay, then the issue is, what
9 are you going to be? So the bottom line is that
10 you have to look at the marginal utility of your
11 test in relation to the population you're using
12 it in.
13 And I think what Mitch's slide is
14 pointing out is not a particular shape, but the
15 fact that it becomes harder and harder to exhibit
16 any marginal utility as the prevalence of the
17 disease goes up. Because eventually, you're
18 going to become a naive Bayesian, because that is
19 the correct approach when the prevalence becomes
20 very high. And in fact, it's true, that is the
21 correct thing you should become, because your
22 test is not as good as predicting the
23 prevalence.
24 Now, the other thing is, you could say
25 well, maybe my test can help a little bit at the
1 limit. But the problem is, your test has a
2 variance associated with it. And then the
3 question becomes, as the prevalence goes up, it
4 becomes almost asymptotic, and so you have very
5 very little room to move, and your test variance
6 can take up that room, so you'll never know
7 whether you're doing anything good or not.
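Dr. Burke's "naive Bayesian" point is the base-rate comparison: at high prevalence, always predicting the majority outcome already scores well, and a modestly accurate test can only lose ground. A sketch with illustrative numbers (the 0.62 echoes the estrogen-receptor AUC he cites, used here loosely as both sensitivity and specificity, which is an assumption for illustration):

```python
prevalence = 0.90   # probability of the outcome in the population
sens = spec = 0.62  # illustrative, modestly accurate test

# Always predict the majority class: the "naive Bayesian" baseline
baseline_accuracy = max(prevalence, 1 - prevalence)

# Accuracy of actually following the test's calls
test_accuracy = prevalence * sens + (1 - prevalence) * spec

print(f"baseline {baseline_accuracy:.2f} vs test {test_accuracy:.2f}")
```

The baseline scores 0.90 while following the test scores 0.62, which is his argument in one line: when prevalence is extreme, a weak test must beat a very strong default, and its own variance can swallow whatever small margin remains.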
8 DR. FERGUSON: Dr. Fruehauf?
9 DR. FRUEHAUF: I would like to thank
10 Dr. Burken and Dr. Burke for telling me that I
11 should be a naive Bayesian, although I don't know
12 what that is yet. I think statistically, I'd
13 like to use this curve, because I agree with
14 this. I think this is true. I think your points
15 are valid and I'd like to use this as an example
16 of how there is a relationship between what we're
17 talking about and what you're talking about,
18 because I don't think they are separated.
19 Now, let's use this curve. Here we
20 have the probability of response in the middle of
21 50 percent, because we're using these tests to
22 predict response, not whether disease is there,
23 so we're making an assumption that disease is
24 equivalent to resistance or sensitivity; true?
25 DR. BURKEN: Well, I'm a little
1 concerned. You know, we may not want to overplay
2 this one issue.
3 DR. FRUEHAUF: This is your model, and
4 you're relating it to in vitro drug response, so
5 please tell me, how does this curve relate to in
6 vitro drug response? What is the relationship
7 between the X and Y axis, and treating patients
8 with chemotherapy?
9 DR. BURKE: Well, let me tell you the
10 way I would choose to use it, okay? And just to
11 make sure we're on the same wavelength here. The
12 way this graph is designed is that that block,
13 the granite block in the middle that's gray,
14 wherever we should have those cutoffs, and we
15 won't argue about that, demonstrates that there
16 is a lot of information added by lab testing in
17 that region, because there is enough uncertainty
18 about whether a patient is either resistant or
19 sensitive to that particular drug, and again,
20 they're reciprocals of each other, so I can use
21 them interchangeably.
22 DR. FRUEHAUF: Okay. Can I go from
23 there? I understand that.
24 DR. BURKE: But let me say, let's not
25 talk about that it measures response. It's the
1 height of the graph, the Y axis, is how much
2 value there is from the lab test result.
3 (Inaudible question from audience.)
4 DR. BURKE: Basically it is simply
5 saying that the greatest uncertainty is at 50
6 percent prevalence, right, of response,
7 non-response, whatever the case might be that is
8 your gold standard.
9 DR. FRUEHAUF: Sure.
10 DR. BURKE: And the issue simply is
11 that if you set your study for a 50 percent
12 response or non-response, and then you test your
13 drug or whatever in that population, then the
14 prevalence isn't going to help you or hinder you
15 in your predictions.
16 DR. FRUEHAUF: Yes, I appreciate that.
17 So let's take Tamoxifen and breast cancer,
18 previously untreated breast cancer. If we give
19 Tamoxifen to women without knowing their receptor
20 status, there's a 30 percent response rate across
21 the board. Okay? No test. Everybody gets
22 Tamoxifen, 30 percent response rate. Well,
23 that's okay, but can we do better? Let's get
24 receptors, and now treat according to the
25 results, and see if we change and enrich for
1 response, and eliminate people from what can be
2 toxic therapy because of side effects. And what
3 we find is that if you have estrogen receptor in
4 the tumor, there is a 75 percent response rate.
5 And if you don't have those receptors, there is a
6 10 percent response rate. So my question to you
7 is, can you relate that knowledge and that test
8 to this curve for me?
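Dr. Fruehauf's figures obey the same mixture identity discussed earlier: a 30 percent unselected response rate, 75 percent response in receptor-positive tumors, and 10 percent in receptor-negative tumors jointly imply what fraction of the treated population is receptor-positive. The three rates are his; the fraction is derived arithmetic, not data:

```python
overall = 0.30   # unselected response rate to Tamoxifen
er_pos = 0.75    # response rate if estrogen receptor present
er_neg = 0.10    # response rate if estrogen receptor absent

# overall = f * er_pos + (1 - f) * er_neg  =>  solve for f
f_er_pos = (overall - er_neg) / (er_pos - er_neg)

print(f"implied ER-positive fraction: {f_er_pos:.2f}")
```

His quoted rates imply that about 31 percent of the treated population is receptor-positive, and the test's value lies in splitting one middling 30 percent probability into a 75 percent group worth treating and a 10 percent group spared the toxicity.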
9 DR. FERGUSON: I am going to take the
10 prerogative of cutting this off right now so the
11 panel can have a discussion. I'm getting hints.
12 DR. WEISENTHAL: Before this, you said
13 that I would have the chance to respond.
14 DR. FERGUSON: I did actually. It's
15 going to have to be very brief.
16 DR. WEISENTHAL: The point I was trying
17 to make, Dr. Burke was implying bad science or
18 sloppy science, or whatever. And also, HCFA in
19 their review specifically excluded the pediatric
20 ALL patients. I just want to make the following
21 very brief points.
22 Firstly, medicine is imperfect.
23 Secondly, medical oncology is inadequate. 70
24 percent of all the treatments that we give don't
25 work. More than half of the chemotherapies are given for
1 non-FDA approved indications, none of which would
2 stand up to the level of rigor that Dr. Burke is
3 asking here.
4 So I think it's very important that you
5 have to look at the information as a whole.
6 Earlier in my presentation I presented what I
7 call the central hypothesis, and the central
8 hypothesis simply stated is this: You test
9 tumors in vitro, you get a spectrum of responses
10 in vitro, and that the responses in vitro are
11 related in some way to the responses in vivo.
12 I showed you 35 studies which were
13 admittedly, and you know, you got some detail
14 there, of variable quality, and some were very
15 marginal quality, but some, particularly the ones
16 that were excluded, and I don't think there is a
17 meaningful biological reason for excluding
18 pediatric patients, other than they don't get
19 Medicare, but the disease is similar enough.
20 But if you look at the work that was
21 done at the Free University of Amsterdam, which
22 was excluded, and I want to tell you about that,
23 that would stand up, I believe, to Dr. Burke's
24 level of rigor. What they did there was they
25 first of all did their training set studies where
1 they did their retrospective analysis. They got
2 their criteria for sensitivity, resistance and so
3 forth. And then in a prospective blinded
4 fashion, using those criteria which had been
5 established from the retrospective study, they
6 prospectively tested it in the cooperative group
7 study in a double blinded fashion, published peer
8 reviewed in the journal Blood, which is one of
9 the most rigorously peer reviewed journals
10 around, and it showed absolutely astonishingly
11 great results. And you have to include that
12 paper in the context of everything that you've
13 seen.
14 Now, the point that I wanted to
15 conclude with is that if all we're talking about
16 is validating an estrogen receptor, it would be
17 very simple. We're talking about one test, a
18 very common disease, breast cancer. But what we
19 tried to do, beginning 20 years ago, is we said
20 we have this morass, we've got hundreds of
21 diseases, hundreds of potential therapies, which
22 are increasing every year dramatically, and there
23 just has to be some way of matching patient to
24 treatment.
25 DR. FERGUSON: Okay. Thank you.
1 OPEN COMMITTEE DISCUSSION
2 DR. FERGUSON: Are there some
3 questions?
4 DR. HELZLSOUER: I would just like to
5 throw out one comment on the estrogen receptor
6 analogy: the reason we know all this is
7 because they were done in clinical trials. They
8 were evaluated in clinical trials, and I didn't
9 want to lose that momentum for tomorrow's
10 discussion.
11 DR. FERGUSON: Other questions from the
12 panel? Let me just ask a point of order here.
13 (Discussion off the record.)
14 DR. FERGUSON: Mr. Barnes?
15 MR. BARNES: Actually, I have a
16 question for Dr. Burke. Would it make any sense
17 to go back over data, or would it in fact be too
18 hard or impossible, to do a disease specific
19 analysis based on test by test? I mean, it seems
20 to me that we're all bumping up against the fact
21 that there is a bunch of different tests, and
22 about 30 different types of cancers.
23 DR. BURKE: No, I'm -- I, for all my
24 strong comments, I'm agnostic as to the test
25 itself. I have no opinion on it one way or
1 another. But that is the only way to evaluate
2 the claim that they really want to make.
3 MR. BARNES: Right. But what I mean
4 is, using the data that are either in the
5 articles, or could the data be generated some
6 other way, to reevaluate.
7 DR. BURKE: Well, let me make two brief
8 comments. One is, it's striking that there isn't
9 a large cohort study for a particular disease,
10 which is what one would expect, given the
11 frequency with which this test seems to be done.
12 But number two, yes. For some diseases, for
13 example CLL, it may in fact be the case that
14 there is sufficient evidence, okay, to evaluate a
15 particular test for that particular disease. And
16 the reason why I say particular disease is
17 because diseases are kind of strange, as you well
18 know and I well know, and CLL is a very strange
19 disease, but it has its own characteristics
20 associated with it. And so yes, you'd want to
21 look at CLL in terms of CLL. Why CLL? You're
22 saying well, for this test, given the
23 characteristics of CLL as a disease, does this
24 test help us? In early stage disease, in late
25 stage disease, for particular treatments, if
1 there are effective treatments, because remember,
2 for therapy significant prognostic factors, if
3 there is no effective treatment, then there is no
4 need for therapy specific prognostic factors.
5 MR. BARNES: Right. Well, let me ask
6 my question a different way. Of the 12 studies,
7 or 13 or 14 or whatever they are, is there a way
8 to go back to them and dissect out the
9 histologies, CLL or whatever, according to test
10 result, specific test by test, and get data? So
11 in other words, do you think that anyone, not
12 necessarily you, could go back to the actual
13 publications and dissect that out?
14 DR. BURKE: It depends on the
15 publication, it depends on the study. Some yes,
16 some no. It depends on the adequacy of the
17 study. Some studies, like retrospective
18 studies, it would be very difficult to do that.
19 Prospective studies that were done properly would
20 be much easier to do, because you would have
21 complete information, which you most of the time
22 don't have in retrospective studies. But yes, it could
23 be done, if the data were there.
24 DR. BURKEN: I'd just like to add a
25 couple of comments as well. Many of the studies
1 on solid tumors and even hematologic tumors are
2 mixtures of tumors, and the studies do not specify
3 histologic subtypes, and it becomes very
4 difficult to create a laundry list of studies by
5 disease type. I think if you go through the
6 handout from this presentation, you will see that
7 unfold, because I do specify the, you know,
8 whether it's mixed or what tumor types it is, you
9 know, and sometimes it's just very difficult.
10 DR. FERGUSON: Other questions from the
11 panel? I have a question I'll throw out. It
12 seems that in a number of the papers that we've
13 seen, when there were comparison groups that they
14 were, if there were patients whose cancer was
15 sensitive to a drug, they were given that drug,
16 whereas the other quote, control group, was one
17 who showed resistance, and those were allowed to
18 have physician's choice in the chemotherapy. Now
19 that seems to me to bias the two groups if the
20 test has any validity at all, so that they really
21 aren't comparable. That is, the ones with the
22 drug, the cancer showed sensitivity, were treated
23 by guidance from that test, whereas the ones that
24 didn't or were resistant, were treated by
25 physician's choice. And then one -- that group
1 comes out worse, and why wouldn't we expect
2 that? And I guess I'm asking for a comment or an
3 explanation for why that's the best thing to do,
4 because it does not seem to me to be the best
5 thing to do. Yes?
6 DR. HOFFMAN: My name's Robert
7 Hoffman. We performed such a prospective study.
8 I think in the previous retrospective studies we
9 showed very extensive correlation between
10 survival and response in the drug response assay,
11 in our case the histoculture drug response
12 assay. So we then designed a trial as you
13 mentioned, comparing outcome of patients who were
14 treated by assay guided therapy if their tumors
15 were responsive in the assay, to clinician's
16 choice in the resistant patients.
17 I think it would have been unethical to
18 treat the resistant patients with the resistant
19 drugs as a matter of course. So that was, I
20 think, the criteria in our study. Of course the
21 next step, I think, would be an absolute
22 randomized trial where you separate the patients
23 beforehand, but I think if knowing someone is
24 resistant, given not only the data from our
25 studies, but the very very extensive data
1 presented by the other groups here, I think it's
2 not, and respectfully in my opinion, it's not
3 ethical to treat with a resistant drug.
4 DR. FERGUSON: Are there any other
5 questions or comments? Go ahead, Dr. Klee.
6 DR. KLEE: The study that Dr. Bosanquet
7 alluded to, at one point in the presentation they
8 were talking about this MRC study, I guess is
9 ongoing, the randomization for, which really is
10 randomizing against use of the drug testing
11 versus not using the drug testing. Does that --
12 that seems like a rather fundamental type study,
13 and I was surprised that hadn't been done
14 earlier, it's ongoing now, but it would be sort
15 of the basis of much of the clinical trial work
16 that's been done on a lot of the therapeutic side
17 of things, so it just surprises me that there was
18 no published study along that line. And
19 apparently there are numerous difficulties in
20 trying to carry that out, but I don't know why
21 that hasn't been done or what has precluded doing
22 it.
23 DR. FERGUSON: Yes, Dr. Weisenthal?
24 DR. WEISENTHAL: As one who
25 participated in the design and funding of such
1 studies, what I have to tell you, it's one of
2 these things that's easier said than done. In
3 1985 I had a large grant from the VA, had 31 VA
4 hospitals, it was a cooperative VA study in
5 multiple myeloma, standard therapy versus assay
6 directed therapy. It was several years in
7 planning, we had two national investigators
8 meetings, one was held here in Baltimore. A
9 tremendous amount of work and everything went
10 into that. What happened was that eight months
11 into the study, accrual was running only about
12 one-fourth of what had been projected, they
13 decided that the study just would not be ever
14 completed and so it was cancelled.
15 Subsequently, we got a study going in
16 the Eastern Cooperative Oncology Group, which was
17 to lead to a randomized trial in non-small cell
18 lung cancer. Again, in the first six months the
19 study accrued six patients, although we had 51
20 hospitals eligible to contribute patients, and
21 that was closed.
22 And I keep mentioning Dan von Hoff,
23 who's the most energetic effective clinical
24 trials organizer I've ever seen; he tried several
25 times, and never completed a single prospective
1 randomized trial. It's just much easier said than
2 done, for all sorts of reasons, that we could
3 discuss over a margarita.
4 DR. FERGUSON: Thank you. Yes, Dr.
5 Burke?
6 DR. BURKE: The cooperative groups and
7 other randomized trialists are collecting frozen
8 tissue, a resource that may be available to
9 you. I mean, they know the different treatments,
10 they know the outcomes, they have snap frozen
11 tissue. The question is, can these assays be
12 done on snap frozen tissue, because if they
13 could, the outcome is already known.
14 DR. FRUEHAUF: It would be really
15 wonderful if we could use frozen tissue for the
16 assays, and this is really one of the technical
17 issues of doing a prospective randomized study,
18 and we did this with the GOG. GOG-118 was a
19 prospective study, wasn't randomized, but was to
20 obtain fresh tissue at surgery, send it to the
21 laboratory, and I am a member of SWOG, and I
22 attend GOG meetings, I'm a member of ASCO, and I
23 can tell you that tissue banks are a great idea,
24 but they haven't really reached fruition because
25 of the logistical problems: moving tissue from
1 one place to another is very difficult. And so
2 we have not -- you can't use snap frozen tissue;
3 it has to be preserved in a live state in media, and
4 transported so it gets there within 24 hours.
5 DR. FERGUSON: Thank you. Other
6 questions from the panel? Yes, Dr. Helzlsouer?
7 DR. HELZLSOUER: I have a question in
8 terms of these assays, and I'm having a little
9 trouble lumping them all together. But is it my
10 understanding that there is only maybe one that
11 tests combinations routinely, and all the rest are
12 single chemotherapy assays?
13 DR. FRUEHAUF: The question of single
14 agents and combinations is kind of a tempest in a
15 teapot in a way. Every lab that I know of tests
16 drug combinations. We test drug combinations at
17 Oncotech, AntiCancer tests drug combinations, Dr.
18 Nagourney tests drug combinations, Dr. Weisenthal
19 tests drug combinations. I think one of the
20 issues that is fundamental is, is there drug
21 synergy, and if you don't test two drugs
22 together where there could be synergy, what are
23 you going to miss in the information? And this
24 goes to the issue of why we use multi-agent
25 therapy. You are an oncologist and an
1 epidemiologist, and I'm sure your thinking, like
2 many oncologists', is that we use multi-agent
3 chemotherapy because of the Goldie-Coldman
4 hypothesis, that there are multiple subsets
5 within each tumor that are differentially
6 sensitive to different agents in the
7 combination. So when platinum was added to
8 testicular chemotherapy regimens, that additional
9 activity killed a subset that was there
10 microscopically. Even though people had CRs,
11 they weren't surviving.
12 So single agents should be active in
13 combinations. So our view is, if you test a
14 single agent and it can't reach its drug target,
15 there's extreme resistance to that single agent,
16 and it can't reach its target because of protein
17 or rapid adduct repair or what have you, that
18 single agent isn't going to add a synergistic
19 effect in the absence of its own effect. So,
20 Dr. DeVita, in the third edition of Principles
21 and Practices, made a statement in his chapter on
22 chemotherapy that combinations should always be
23 made up of active single agents. And so we look
24 for single agent activity against the cancer in
25 the salvage setting, and then we'll move this
1 agent up into the adjuvant setting, when it's
2 been proven to have activity.
3 So single agent testing is predicated
4 on the idea that if the agent would have no
5 benefit as a single agent, it's unlikely then
6 that it would have benefit in a combination, but
7 we all test combinations.
8 DR. FERGUSON: But let me -- as I
9 recall reading these papers, none of them
10 actually routinely were testing two agents
11 simultaneously on one. I mean, the majority of
12 the papers that we read and that Dr. Burken
13 presented used single agents. Maybe serially they
14 would test several agents, but not together in
15 one Petri dish routinely.
16 DR. FRUEHAUF: Yes. I think that
17 Dr. Nagourney presented evidence, and I will let
18 them speak, but just for our role, we tested the
19 concept of whether single agent testing was
20 predictive in combination therapy in breast
21 cancer. So we took the single agents and looked
22 at their activity as single agents, and added up
23 their scores as I presented this morning, and
24 that was predictive of how the person did in
25 response to the combination. Now other people
1 have tested combinations as well, and I'm sure
2 they will comment on that.
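The additive scoring Dr. Fruehauf describes, summing each drug's single-agent score to grade a combination, can be sketched roughly as follows. The score values and the cut point are illustrative inventions, not any laboratory's actual scale:

```python
# Hypothetical sketch of additive single-agent scoring: each drug in a
# combination gets an in vitro activity score, and the scores are summed
# to grade the combination. Scores and cut point are illustrative only.

def combination_score(single_agent_scores):
    """Sum the single-agent activity scores for a combination."""
    return sum(single_agent_scores)

def predicted_response(scores, cut_point=2):
    """Call the combination 'likely active' when the summed score
    reaches an illustrative cut point."""
    return combination_score(scores) >= cut_point

# e.g. one drug scored 1 (intermediate), another scored 2 (sensitive)
print(combination_score([1, 2]))    # 3
print(predicted_response([1, 2]))   # True
print(predicted_response([0, 0]))   # False
```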
3 DR. FERGUSON: Okay. Was I misreading
4 these papers?
5 DR. HELZLSOUER: That's the same way I
6 interpreted them; they were all
7 single agent tests, and not combinations.
8 DR. FERGUSON: Yeah. I mean, all the
9 published stuff we saw was single agent.
10 DR. HANDELSMAN: The bulk of it was,
11 but not all of it.
12 DR. FERGUSON: Okay. Yes?
13 DR. HOFFMAN: Technically, to test
14 combinations is entirely feasible. We're dealing
15 with most of the tests with culture dishes,
16 culture wells, with medium. You can add one
17 drug, two drugs --
18 DR. FERGUSON: I don't disagree with
19 that.
20 DR. HOFFMAN: You can add ten drugs.
21 Most of the studies, as has been mentioned,
22 have focused on single drugs to understand their
23 individual activity. We've done a study as yet
24 unpublished that shows predictivity to the
25 combination treatment for ovarian cancer as
1 predicted by Cisplatin alone, but to mix drugs in
2 the cultures is technically trivial.
3 DR. NAGOURNEY: Yeah, if I might just
4 address that. Actually we specifically do focus
5 on drug combinations, and as Dr. Fruehauf alluded
6 to, most drug combinations are basically
7 additive, and in some cases subadditive or
8 antagonistic. There are a small number of
9 combinations that are genuinely
10 synergistic, and which are extremely attractive
11 and interesting as therapies. One of the most
12 attractive are the interactions between
13 alkylating agents or platinum, and
14 antimetabolites, a couple of examples of which
15 were cited in some things we referenced, one
16 paper in the British Journal of Cancer,
17 indicating true synergy between alkylating agents
18 and 2-CdA, and that observation has now resulted in
19 a 100 percent response rate in an ECOG trial.
20 Similarly, Cisplatin and Gemcitabine as
21 a related combination in solid tumors is
22 presenting us with really one of the most active
23 combinations we've ever seen in medical oncology,
24 but those are actually pretty rare. So I think
25 for the most part, most drugs are intelligently
1 given as single agents, but there are a few very
2 beautiful examples of synergy, and they can be
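The additive-versus-synergistic distinction Dr. Nagourney draws can be illustrated with a Bliss-independence check, one common way of defining "additive" for drug pairs. The fractional-kill numbers and tolerance below are invented for the example; real assays would measure these from cell survival:

```python
# Illustrative Bliss-independence check: two drugs are "additive" when
# the observed combined kill matches what independent action predicts,
# "synergistic" when it exceeds it. Values and tolerance are invented.

def bliss_expected(kill_a, kill_b):
    """Expected combined fractional kill if the two drugs act
    independently (additive in the Bliss sense)."""
    return kill_a + kill_b - kill_a * kill_b

def classify_interaction(kill_a, kill_b, observed, tolerance=0.05):
    expected = bliss_expected(kill_a, kill_b)
    if observed > expected + tolerance:
        return "synergistic"
    if observed < expected - tolerance:
        return "antagonistic"
    return "additive"

print(classify_interaction(0.5, 0.4, 0.70))  # additive (expected ~0.70)
print(classify_interaction(0.5, 0.4, 0.90))  # synergistic
print(classify_interaction(0.5, 0.4, 0.55))  # antagonistic
```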
4 DR. FERGUSON: Is this in response to
6 DR. KERN: Yes. Just briefly. In the
7 now famous, or perhaps infamous Kern and
8 Weisenthal paper of 1990, we had a cohort of 105
9 patients that were treated with combinations, and
10 we showed that --
11 DR. FERGUSON: I don't doubt that the
12 patients are treated with combinations. The
13 issue was, was the test done with two drugs?
14 DR. KERN: That's correct. All the
15 drugs were tested singly and in combination in
16 the laboratory, and correlated with the clinical
17 response.
18 DR. FERGUSON: That wasn't clear, at
19 least to me.
20 DR. KERN: I understand. It's in Table
21 5 of that paper. Thank you.
22 DR. HELZLSOUER: Another concern I have
23 which hasn't been addressed, and we didn't really
24 have it in our packet, were the reproducibility
25 issues of these tests. Then I just heard that
1 you have to have the fresh tissue within the lab
2 within 24 hours, and this may need some
3 clarification. Also, we're dealing with home
4 brews that are being done in certain labs, so the
5 tissue has to go to that lab, there won't be
6 kits. So what will be the accessibility of
7 this? Not just -- so we have the reproducibility
8 issue in doing that, but then in general, how
9 could these be done if there are only a
10 few labs doing them?
11 DR. FRUEHAUF: Well, I think that if
12 there's a favorable decision today, there will be
13 many more labs doing this.
14 DR. HELZLSOUER: Well then, I would
15 like to have more information even yet on
16 reproducibility.
17 DR. FRUEHAUF: Yeah. The
18 reproducibility thing is very important. And
19 we're inspected by the College of American
20 Pathologists to fulfill CLIA regulations, and we
21 have to show precision, we have to show
22 sensitivity and specificity, and we do that by
23 looking at thousands of cases in our database to
24 show that the population patterns remain
25 stable.
1 Most of the laboratories do this the same way. We work
2 by getting specimens from all over the country,
3 and we set up a system where Federal Express
4 takes the specimen immediately after surgery and
5 brings it to our laboratory. The other labs use
6 similar courier processes. So it's not that it's
7 hard to get a motivated person in the pathology
8 department to send the specimen.
9 DR. HELZLSOUER: Let me ask you this,
10 about your reproducibility studies. So they're
11 done using your known samples, so you have your
12 known controls; is that what you're saying within
13 your lab?
14 DR. FRUEHAUF: That's correct.
15 DR. HELZLSOUER: Is that what the
16 regulations are? My experience with that, with
17 dealing with laboratories, is that's usually not
18 very reproducible when you're dealing with sent
19 specimens that you do not know. So I wonder if
20 those studies have been done in these assays to
21 determine for samples unknown to you --
22 DR. FRUEHAUF: Yes.
23 DR. HELZLSOUER: Sent specimens.
24 DR. FRUEHAUF: That was done. SWOG did
25 a study in the '80s where they looked at
1 concordance between laboratories, and they sent
2 the same specimens to different laboratories.
3 And they found a concordance level of about 80
4 percent between laboratories for the same result,
5 and I think that is really significant
6 considering the variability of biological
7 specimens.
8 DR. BURKE: (Inaudible).
9 DR. FRUEHAUF: It's from a book, Tumor
10 Cloning Assays, that was published. Somebody
11 might know that better than I do.
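The roughly 80 percent between-laboratory concordance figure quoted can be sketched as simple percent agreement: each lab classifies the same specimens, and concordance is the fraction on which they agree. The specimen calls below are invented for illustration:

```python
# Minimal sketch of between-laboratory concordance: two labs each call
# the same specimens "S" (sensitive) or "R" (resistant); concordance is
# the fraction of specimens on which they agree. Data are invented.

def concordance(results_lab_a, results_lab_b):
    agree = sum(a == b for a, b in zip(results_lab_a, results_lab_b))
    return agree / len(results_lab_a)

lab_a = ["S", "R", "R", "S", "S", "R", "S", "R", "S", "R"]
lab_b = ["S", "R", "S", "S", "S", "R", "S", "R", "S", "S"]
print(concordance(lab_a, lab_b))  # 0.8
```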
12 DR. BURKE: My question was, did you
13 have a citation on that?
14 DR. FRUEHAUF: I can provide that to
15 you after the meeting.
16 DR. BURKE: Because I share your
17 concern about -- I mean even the most common
18 tests have difficulty, most prognostic factor
19 tests have a great deal of problems with
20 reproducibility. CAP has been trying to do
21 standardization for years in this area, on even
22 automated type tests, and it's very, very
23 difficult to do.
24 DR. FRUEHAUF: I can tell you what
25 we've done. We have cell lines that we study.
1 We have 25 different cell lines with
2 characterized drug response patterns. And we
3 send these as unknowns into the laboratory on a
4 periodic basis, to make sure that every day,
5 every week when we're running the assays, we are
6 getting the appropriate result for these cell
7 lines, which are unknown to the people in the lab
8 who are doing the assay. So we have an internal
9 validation process with 15 to 20 cell lines that
10 we run routinely to validate the Cisplatin result
11 is appropriate for the ovarian cell line, for
12 instance; that the Adriamycin result is
13 appropriate for the breast cancer cell line, and
14 this is an internal validation process which we
15 use the same way for doing markers, for doing
16 HER-2/neu and p53 as prognostic factors, where
17 you have to have internal validations you run in
18 your laboratory to confirm that every time you're
19 running the test, you're getting the same
20 expected results.
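The blinded cell-line control process described, running characterized cell lines as unknowns and checking that each result lands where it should, can be sketched as a simple range check. The cell-line names and expected ranges below are invented for illustration, not Oncotech's actual controls:

```python
# Hedged sketch of blinded cell-line QC: control lines with
# characterized drug responses are run as unknowns, and a run passes
# only if each observed result falls in its expected range.
# Names and ranges are illustrative.

EXPECTED = {
    # (cell line, drug): (low, high) expected survival fraction
    ("ovarian_line", "cisplatin"):  (0.60, 0.80),
    ("breast_line",  "adriamycin"): (0.20, 0.40),
}

def qc_pass(cell_line, drug, observed):
    lo, hi = EXPECTED[(cell_line, drug)]
    return lo <= observed <= hi

print(qc_pass("ovarian_line", "cisplatin", 0.72))   # True
print(qc_pass("breast_line", "adriamycin", 0.55))   # False
```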
21 DR. KASS: Could I ask a follow-up
22 question on that? In that particular study that
23 you referenced, one thing that was of interest to
24 me, we have seen lots of different types of
25 laboratory tests, and I was wondering if in that
1 study they addressed the results comparing the
2 different types of assays that we have heard
3 referred to today, have any studies been done to
4 look at the comparability of the DiSC versus the
5 MTT, versus whatever?
6 DR. FRUEHAUF: Yes, and I think that
7 other people do this all the time. Dr.
8 Weisenthal does three separate assays on each
9 specimen that comes into his laboratory. What we
10 did for GOG 118 internally, we ran a DiSC assay,
11 and we ran an EDR assay on the same specimen, and
12 we looked at the cut points of low, intermediate
13 and extreme resistance, and we found that they
14 were exactly concordant with a very small, one to
15 two percent difference. So, the cut points are
16 very important, reproducibility is very
17 important, but all the people who are doing this
18 have been doing this for 15 or 20 years and
19 have -- there was an NCI consensus conference in
20 the '80s that addressed these specific issues of
21 quality control, because this all stems from the
22 NCI funding these laboratories originally to
23 develop this technology. And it was partly done
24 for drug discovery and it was partly done for
25 helping patients get the right therapy. So the
1 consensus conference looked at the issues of
2 coefficient of variation, outliers, how many
3 standards you needed to run with each assay, et
4 cetera. And they set up a profile of quality
5 control requirements internally in the laboratory
6 that would be necessary, and they compared the
7 different laboratories that were doing the
8 testing, so that there would be a uniformity of
9 process. And so we incorporated into our
10 laboratory procedures those quality assurance and
11 quality control measures, along with the internal
12 standards being run all the time. And what we
13 are doing now is using these cell lines to send
14 to the other labs as proficiency tests, because
15 we have to have proficiency tests to maintain the
16 quality assurance.
17 DR. FERGUSON: Thank you. Very briefly,
18 Dr. Kern.
19 DR. KERN: Yeah, very briefly. There
20 was a study published by NCI a few years ago
21 where we compared four laboratories, UCLA lab,
22 Sid Salmon's lab in Arizona, Dan von Hoff in
23 Texas, and Mayo Clinic, Dr. Liebe's lab, they
24 were all sent -- all labs were sent 20 compounds,
25 blinded, coded. Most of them were anticancer
1 drugs; some of them included sugar and salt. And
2 we published on the very close reproducibility of
3 all four laboratories. I can provide you that
4 citation.
5 DR. FERGUSON: Thank you. Dr. Loy?
6 DR. LOY: I just wanted to ask a
7 question that remains in my mind, and that is,
8 when is the optimal time to biopsy? Certainly
9 you would expect tumor biology to change after,
10 or posttreatment, whether it be chemotherapy or
11 radiation, and I'm just wondering if there's any
12 studies to talk about or clarify when the most
13 appropriate time to biopsy is, and if there's any
14 predictive value in testing those tumors that
15 have not previously been treated.
16 DR. ROBINSON: My name is William
17 Robinson. I'm with the U.S. Harvest Medical
18 Technologies Corporation. We didn't send
19 literature to the panel, but we did get in on
20 this at the end, thank the Lord. One thing we
21 wanted to draw reference to was that question
22 about timing, because according to a research
23 paper that came out of NIH in 1981, they felt the
24 most appropriate time was within the first four
25 hours of biopsy, because I think according to
1 Dr. Wing, that the P-glycoprotein does get
2 induced very early on, so therefore, you don't
3 get a real response, a clear response to what the
4 tumor looks like in vivo as opposed to what you
5 actually see in the Petri dish. Some of the
6 literature we sent actually does show you, for
7 those who can actually see this, that we were
8 able to pick up metabolism very early on,
9 within minutes. So if it's a case where
10 you're going to compare MTT tests and the DiSC
11 tests, I think the idea is you want to get the
12 tumor in the closest condition that it appears
13 naturally. As far as automation is concerned,
14 and that's where we come in, we think that this
15 is the kind of tool, and this the kind of forum for
16 discussing how you combine therapies, which
17 makes this a very good and useful meeting.
18 Thank you.
19 DR. LOY: Thank you for that, but my
20 question was more directed towards when in the
21 course of the history of the disease, is it
22 pretreatment or posttreatment?
23 DR. FRUEHAUF: Acquired resistance is
24 an important question, so that if somebody has a
25 biopsy and you get a result and you treat the
1 patient, and then the patient's failed primary
2 therapy and you want to go back to your result.
3 The question is, is that result still valid to
4 treat the patient now, who's had intervening
5 therapy? Is that part of your question?
6 DR. LOY: That is part of my question,
7 but please address that issue.
8 DR. FRUEHAUF: So first, of course, you
9 have two kinds of variability up front in newly
10 presenting patients; you've got inter-site variability
11 for synchronous lesions, and variability over time, and so we studied in
12 paired cases synchronous lesions and metachronous
13 lesions. And we looked at extreme resistance
14 frequencies for the various drugs, between sites
15 and over time, for ovarian cancer. We presented
16 this at AACR. We found that there is a very low
17 frequency of a two-drug category shift, of about
18 5 percent, in terms of synchronous lesions. So
19 if you looked at platinum resistance in an
20 ovarian cancer patient and you compared the
21 primary ovary with the peritoneal metastases,
22 only 5 percent of the time was there a
23 significant difference in the result. It went up
24 to about 8 percent when it was over time, so the
25 difference over time -- now, I think the key is,
1 there's not a lot of heterogeneity and change in
2 resistance patterns, but there can be a decrease
3 in sensitivity, so that if you're using the assay
4 to identify ineffective agents, an agent that's
5 ineffective initially, after intervening therapy,
6 was still inactive later. It was a loss of
7 sensitivity that was occurring. So there is a
8 robust ability to say if the drug wasn't going to
9 work up front, it's unlikely after failure or
10 progression that that drug is now going to work
11 in the relapse setting.
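The paired-lesion comparison described, where each result is graded and a pair counts as discordant when it shifts by two resistance categories, can be sketched as follows. The grading labels follow the low/intermediate/extreme scheme mentioned earlier; the paired data are invented (the quoted figures were about 5 percent for synchronous lesions and 8 percent over time):

```python
# Sketch of the paired-lesion analysis: each lesion's assay result is
# graded low / intermediate / extreme resistance, and a pair "shifts"
# when the two results differ by two categories. Data are invented.

GRADE = {"low": 0, "intermediate": 1, "extreme": 2}

def two_category_shift(result_a, result_b):
    return abs(GRADE[result_a] - GRADE[result_b]) >= 2

pairs = [("low", "low"), ("low", "extreme"),
         ("intermediate", "extreme"), ("extreme", "extreme")]
shifts = sum(two_category_shift(a, b) for a, b in pairs)
print(shifts / len(pairs))  # 0.25
```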
12 DR. LOY: Have you found the same thing
13 or have studies shown the same thing
14 to be characteristic of hematologic malignancies,
15 which are known to transform after chemotherapy?
16 DR. FRUEHAUF: I would leave that to
17 one of my friends who does this research on that.
18 DR. FERGUSON: Mitch, do you have
19 anything to add?
20 DR. BURKEN: Just a quick comment on a
21 study. Just -- I didn't get it in before. The
22 issue came up of concordance or discordance
23 between different assay formats. And you know,
24 there have been several studies; as a matter of
25 fact, some of them were listed this morning. One
1 of the studies, I'm not sure whether it was
2 listed or not, was by Tavassol in Oncology in
3 1995, where there were 17 patients that had head
4 to head FCA and EDR, and there was some
5 discordance. Twelve of the 17 patients had at
6 least two drugs that had different patterns.
7 The problem with
8 those kinds of studies, as I said, you run up
9 against complicating factors like the tumor
10 heterogeneity that we talked about earlier, where
11 the differences may be due to the fact that
12 there's just intrinsic tumor heterogeneity. And
13 so, it does open up I think another vista of ways
14 of looking at test accuracy.
15 DR. FERGUSON: Dr. Bosanquet, did you
16 have some response?
17 DR. BOSANQUET: Can I address a couple
18 of these points? We very early on looked at
19 different biopsy sites for the hematologics, and
20 compared drug sensitivity. So we looked at
21 blood, bone marrow, lymph node, and found almost
22 identical drug sensitivity from those three
23 sites, in CLL and non-Hodgkin's lymphoma, with similar
24 results.
25 We have also -- I also concur with the
1 ovarian data that John Fruehauf has just
2 mentioned. We also in our laboratory find almost
3 identical results between ascites and the primary
4 tumor in the ovarian setting.
5 The point was raised about the timing
6 of the biopsies. In 1988 we published a paper in
7 Cancer, which hasn't been mentioned, in which we
8 looked at drug sensitivity before and after an
9 intervening period of time. If there was no
10 intervening chemotherapy, there was no difference
11 in drug sensitivity from one to the subsequent
12 test. If there was intervening chemotherapy that
13 was not the drug that you were testing -- I'm
14 sorry -- if there was intervening chemotherapy,
15 for instance, with Doxorubicin, and you looked at
16 the difference in chlorambucil sensitivity before
17 and after the Doxorubicin, there was usually a
18 slight increase in resistance, and chlorambucil
20 If you looked at the drug that had been
21 given in between, so you tested Doxorubicin, then
22 you gave Doxorubicin, then you tested Doxorubicin
23 again, you saw a greater increase in resistance
24 between the two tests. There was one anomaly to
25 this universal finding, involving what is now
1 becoming a standard chemotherapy in CLL in
2 Britain. And that is that we found that if
3 patients were treated with chlorambucil, and this
4 is just in CLL, if patients were treated with
5 chlorambucil, they became, or rather their cells
6 became, 10-fold more sensitive
7 to the steroids. And this is an anomalous
8 finding, which is really quite exciting. And if
9 you look in the original literature on steroids,
10 not much use in untreated CLL. But we found
11 them, high dose methylprednisolone for instance,
12 to be very effective in previously treated CLL,
13 supporting this finding from the laboratory. So
14 that's, as far as I'm aware, the only time that
15 increased sensitivity is induced by treatment.
16 DR. FERGUSON: Other questions from the
17 panel members or comments? If not, we will
18 reconvene tomorrow morning at 8:00.
19 (The panel adjourned at 4:33 p.m.,
20 November 15, 1999.)