Blog Post
Adaptive Design Series: Why is doing dose escalation studies so hard?
July 11, 2012
Note: This article is one of a series about adaptive design that comes from a blog written by Dr. Karen Kesler from 2010 to 2011. That blog is no longer active, but it contained some great information, so we wanted to re-post it here.
On the surface, dose escalation studies are some of the most intuitive studies around. Heck, everybody thinks they can run a traditional 3+3 design: “Just dose three people, and if nobody has a toxicity, increase the dose for the next three; if two of the three have a toxicity, stop the study and declare a winner.” Yes, I’m exaggerating, but my point is that even though this is an adaptive design everybody understands, I’ve been really struggling with dose escalation designs in recent years, and I don’t think I’m being dense about it. The problem I keep running into is the basic principle of defining a “toxicity.” If you work in oncology, you can stop reading now; you guys have this down to a science. Chemotherapeutic agents, by their very mechanism of action, cause specific types of “bad things” to happen (e.g., neutropenia or infections), so toxicities are predictable and easy to define. Moving out of that well-defined realm, however, toxicities can be hard to pin down. The toxicity needs to relate directly to the therapy under consideration, so if the mechanism of action or its consequences are not known (not an unusual situation in drug development, btw), how do you define it? We can fall back on outcomes that are typical for the clinical area, but how do we know that the drug is causing the toxicity and that it wouldn’t have happened anyway? With only three or four subjects in each dosing cohort, one or two events can have a huge impact on the conduct of the study and therefore on the determination of the “best” dose.
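To see how much those one or two events can swing things, here is a minimal simulation sketch of the caricatured 3+3 rule. The codification used here (escalate on 0 of 3, expand to 6 on 1 of 3, call the next-lower dose the maximum tolerated dose on 2 or more) is one common convention, protocols differ on the details, and the dose levels and toxicity probabilities below are made up purely for illustration:

```python
import random


def simulate_3plus3(tox_probs, seed=None):
    """Simulate one run of a classic 3+3 dose-escalation rule.

    tox_probs: true probability of a dose-limiting toxicity (DLT) at each
    dose level, lowest dose first. Returns the index of the dose declared
    the maximum tolerated dose (MTD), or None if even the lowest dose
    looks too toxic.
    """
    rng = random.Random(seed)
    level = 0
    while True:
        # Dose a cohort of 3 at the current level.
        dlts = sum(rng.random() < tox_probs[level] for _ in range(3))
        if dlts == 1:
            # Exactly one DLT: expand to 6 subjects at the same level.
            dlts += sum(rng.random() < tox_probs[level] for _ in range(3))
        if dlts <= 1:
            # 0 of 3, or at most 1 of 6: escalate (or stop at the top dose).
            if level == len(tox_probs) - 1:
                return level
            level += 1
        else:
            # 2 or more DLTs: too toxic; the MTD is one level down.
            return level - 1 if level > 0 else None


# With cohorts this small, the declared MTD bounces around from run to run.
true_probs = [0.05, 0.10, 0.20, 0.35, 0.50]  # hypothetical DLT rates
picks = [simulate_3plus3(true_probs, seed=s) for s in range(1000)]
for dose in [None, 0, 1, 2, 3, 4]:
    print(f"dose {dose}: declared MTD in {picks.count(dose)} of 1000 runs")
```

Run it and you will see that even with the “true” toxicity curve held fixed, the declared MTD spreads across several dose levels from run to run, which is exactly why the definition of a toxicity carries so much weight.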
Perhaps an illustration is in order. Take my favorite clinical area, sickle cell disease, and say we’re trying to treat patients who are having an acute vaso-occlusive crisis (VOC): lots of blood cells sickling and sticking together, causing a huge amount of pain and doing all sorts of organ damage. We have a good idea of whether our new therapy works, because people get out of the emergency department or hospital faster. But how do we define a toxicity? Researchers have made huge strides in understanding the mechanism of a VOC over the years, but I can assure you we are only seeing the tip of the iceberg. On top of that, there are no other compounds that actually treat this situation (patients only get palliative pain therapy), so we have no experience watching how other compounds behave here. Bottom line: we’re flying blind in terms of mechanism. To define our toxicity, we could choose some typical adverse events that occur in these patients, like acute chest syndrome. But if the compound doesn’t affect those events, we’re building an entire study on the poor foundation of an irrelevant endpoint. Or we could go general and choose any bad event of sufficient magnitude, the “Any AE of Grade III or IV” option. But that puts our study at the mercy of random (or, given how sick these patients are, not so random) bad adverse events.
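A back-of-the-envelope calculation shows how exposed that last option leaves us. Suppose (and these rates are assumptions for illustration, not sickle cell data) that Grade III or IV events occur in about 10% of these very sick patients no matter what we give them. Then a cohort of 3 has roughly a 27% chance of producing at least one such event, and a cohort of 6 roughly 47%, with the drug contributing nothing at all:

```python
# Chance that a small cohort shows at least one Grade III/IV event purely
# from the background rate, with the study drug contributing nothing.
# The background rates below are assumptions, not sickle cell data.

def prob_at_least_one(background_rate, cohort_size):
    """P(one or more events in the cohort), assuming independent subjects."""
    return 1 - (1 - background_rate) ** cohort_size


for p in (0.05, 0.10, 0.20):   # assumed background rates of Grade III/IV AEs
    for n in (3, 6):           # typical 3+3 cohort sizes
        print(f"background {p:.0%}, cohort of {n}: "
              f"P(at least one event) = {prob_at_least_one(p, n):.0%}")
```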
My challenge to you—tell us how you’ve dealt with this situation. You don’t have to give any trade secrets away, just describe the clinical area, what the expected effects of the compound were (if any) and how you chose a definition of “toxicity”. Maybe we’ll all learn something.