(This is part 1 of a 3-part series on Inflammatory Disorders Studies. View part 2 here. View the complete series in our Inflammation eBook.)
One confounding factor that can make it difficult to discriminate an active treatment effect from placebo in clinical studies is subject eligibility creep: subjects with milder disease severity at baseline may be enrolled inappropriately by sites struggling to meet recruitment targets and timelines. Because these subjects are initially assessed as having the more severe disease grades required by the inclusion criteria, baseline severity across the study population becomes skewed and misrepresented.
The inclusion of these de facto milder patients can make it harder to observe a treatment difference versus placebo and, unfortunately, places the trial at significant risk of failure. Multiple other factors also contribute to the so-called placebo response: placebo response rates of 14 to 20% have been observed in indications such as psoriasis, nearly 30% in placebo-controlled studies in rheumatoid arthritis, and even higher rates in ulcerative colitis studies. A high placebo response can mask a genuine treatment effect, putting otherwise effective drugs at risk of failing the study's primary endpoint.
In addition, Immune-Mediated Inflammatory Disorders (IMIDs) tend to follow unpredictable, chronic remitting and relapsing courses. This makes it crucial to confirm disease severity and stability at baseline through at least two separate assessments, and to have follow-up treatment evaluations conducted by the same evaluator throughout the study to enforce standardization and mitigate assessment variability. Specific training of the site staff who will perform certain study assessments, such as ACR20 assessments in rheumatoid arthritis studies, is critical: it reduces variability in patient assessments across sites and regions and ensures greater standardization across the study, yielding more robust data. Blinded, standardized, centralized reading of selected assessments, e.g. long-term radiographic evaluations, is likewise a valuable tool for minimizing bias and endpoint variability.
The key to combating eligibility creep is to bear these challenges and the natural history of the disease in mind when recruiting subjects, and to proactively put in place measures aimed at improving the quality of enrollment and at rigorously standardizing baseline and endpoint assessments.