• Selected filters:
  • McDermott, Michael P.
  • Review
  • 2015
【Matching records: 4】

JOURNAL OF PAIN, 16 (2015)

McKeown, Andrew, Gewandter, Jennifer S., McDermott, Michael P., Pawlowski, Joseph R., Poli, Joseph J., Rothstein, Daniel, Farrar, John T., Gilron, Ian, Katz, Nathaniel P., Lin, Allison H., Rappaport, Bob A., Rowbotham, Michael C., Turk, Dennis C., Dworkin, Robert H., Smith, Shannon M.

License: Free


Sample size calculations determine the number of participants required to have sufficiently high power to detect a given treatment effect. In this review, we examined the reporting quality of sample size calculations in 172 publications of double-blind randomized controlled trials of noninvasive pharmacologic or interventional (ie, invasive) pain treatments published in European Journal of Pain, Journal of Pain, and Pain from January 2006 through June 2013. Sixty-five percent of publications reported a sample size calculation, but only 38% provided all elements required to replicate the calculated sample size. In publications reporting at least 1 element, 54% provided a justification for the treatment effect used to calculate sample size, and 24% of studies with continuous outcome variables justified the variability estimate. Publications of clinical pain condition trials reported a sample size calculation more frequently than experimental pain model trials (77% vs 33%, P < .001) but did not differ in the frequency of reporting all required elements. No significant differences in reporting of any or all elements were detected between publications of trials with industry and nonindustry sponsorship. Twenty-eight percent included a discrepancy between the reported number of planned and randomized participants. This study suggests that sample size calculation reporting in analgesic trial publications is usually incomplete. Investigators should provide detailed accounts of sample size calculations in publications of clinical trials of pain treatments; such detail is necessary for reporting transparency and communication of pre-trial design decisions. Perspective: In this systematic review of analgesic clinical trials, sample size calculations and the required elements (eg, treatment effect to be detected; power level) were incompletely reported. A lack of transparency regarding sample size calculations may raise questions about the appropriateness of the calculated sample size. (C) 2015 by the American Pain Society
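As a minimal sketch of the kind of calculation the abstract says is often incompletely reported, the standard two-group comparison-of-means formula (normal approximation) needs exactly the elements the review checked for: the treatment effect to be detected, the variability estimate, the significance level, and the power. All numbers below are illustrative, not taken from any reviewed trial.

```python
from math import ceil
from statistics import NormalDist


def two_sample_n(delta, sigma, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided two-sample comparison of means
    (normal approximation): n = 2 * sigma^2 * (z_{1-alpha/2} + z_{power})^2 / delta^2.
    """
    z = NormalDist().inv_cdf  # standard normal quantile function
    n = 2 * (sigma ** 2) * (z(1 - alpha / 2) + z(power)) ** 2 / delta ** 2
    return ceil(n)


# Hypothetical example: detecting a 1-point difference on a 0-10 pain
# intensity scale, assuming SD 2.5, alpha = .05 (two-sided), 80% power.
print(two_sample_n(delta=1.0, sigma=2.5))  # per-group n
```

Omitting any one of these four inputs from a publication makes the reported sample size impossible to replicate, which is the gap the review quantifies.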

JOURNAL OF PAIN, 16 (2015)

Smith, Shannon M., Hunsinger, Matthew, McKeown, Andrew, Parkhurst, Melissa, Allen, Robert, Kopko, Stephen, Lu, Yun, Wilson, Hilary D., Burke, Laurie B., Desjardins, Paul, McDermott, Michael P., Rappaport, Bob A., Turk, Dennis C., Dworkin, Robert H.

License: Free


Pain intensity assessments are used widely in human pain research, and their transparent reporting is crucial to interpreting study results. In this systematic review, we examined reporting of human pain intensity assessments and related elements (eg, administration frequency, time period assessed, type of pain) in all empirical pain studies with adult participants in 3 major pain journals (ie, European Journal of Pain, Journal of Pain, and Pain) between January 2011 and July 2012. Of the 262 articles identified, close to one-quarter (24%) ambiguously reported the pain intensity assessment. Elements related to the pain intensity assessment were frequently not reported: 31% did not identify the time period participants were asked to rate, 43% failed to report the type of pain intensity rated, and 58% did not report the specific location or pain condition rated. No differences in reporting quality were observed among randomized clinical trials, experimental studies (eg, studies involving experimental manipulation without random group assignment and blinding), and observational studies. The ability to understand study results, and to compare results between studies, is compromised when pain intensity assessments are not fully reported. Recommendations are presented regarding key details for investigators to consider when conducting and reporting pain intensity assessments in human adults. Perspective: This systematic review demonstrates that publications of pain research often incompletely report pain intensity assessments and their details (eg, administration frequency, type of pain). Failure to fully report details of pain intensity assessments creates ambiguity in interpreting research results. Recommendations are proposed to increase transparent reporting. (C) 2015 by the American Pain Society

JOURNAL OF PAIN, 16 (2015)

Gewandter, Jennifer S., McKeown, Andrew, McDermott, Michael P., Dworkin, Jordan D., Smith, Shannon M., Gross, Robert A., Hunsinger, Matthew, Lin, Allison H., Rappaport, Bob A., Rice, Andrew S. C., Rowbotham, Michael C., Williams, Mark R., Turk, Dennis C., Dworkin, Robert H.

License: Free


Peer-reviewed publications of randomized clinical trials (RCTs) are the primary means of disseminating research findings. Spin in RCT publications is misrepresentation of statistically nonsignificant research findings to suggest treatment benefit. Spin can influence the way readers interpret clinical trials and use the information to make decisions about treatments and medical policies. The objective of this study was to determine the frequency with which 4 types of spin were used in publications of analgesic RCTs with nonsignificant primary analyses in 6 major pain journals. In the 76 articles included in our sample, 28% of the abstracts and 29% of the main texts emphasized secondary analyses with P values <.05; 22% of abstracts and 29% of texts emphasized treatment benefit based on nonsignificant primary results; 14% of abstracts and 18% of texts emphasized within-group improvements over time, rather than primary between-group comparisons; and 13% of abstracts and 10% of texts interpreted a nonsignificant difference between groups in a superiority study as comparable effectiveness. When considering the article conclusion sections, 21% did not mention the nonsignificant primary result, 22% were presented with no uncertainty or qualification, 30% did not acknowledge that future research was required, and 8% recommended the intervention for clinical use. Perspective: This article identifies relatively frequent spin in analgesic RCTs. These findings highlight a need for authors, reviewers, and editors to be more cognizant of how analgesic RCT results are presented and attempt to minimize spin in future clinical trial publications. (C) 2015 by the American Pain Society. Published by Elsevier Inc. All rights reserved.

JOURNAL OF PAIN, 16 (2015)

Singla, Neil, Hunsinger, Matthew, Chang, Phoebe D., McDermott, Michael P., Chowdhry, Amit K., Desjardins, Paul J., Turk, Dennis C., Dworkin, Robert H.

License: Free


The magnitude of the effect size of an analgesic intervention can be influenced by several factors, including research design. A key design component is the choice of the primary endpoint. The purpose of this meta-analysis was to compare the assay sensitivity of 2 efficacy paradigms: pain intensity (calculated using summed pain intensity difference [SPID]) and pain relief (calculated using total pain relief [TOTPAR]). A systematic review of the literature was performed to identify acute pain studies that calculated both SPIDs and TOTPARs within the same study. Studies were included in this review if they were randomized, double-blind, placebo-controlled investigations involving medications for post-surgical acute pain and if enough data were provided to calculate TOTPAR and SPID standardized effect sizes. Based on a meta-analysis of 45 studies, the mean standardized effect size for TOTPAR (1.13) was .11 higher than that for SPID (1.02; P = .01). Mixed-effects meta-regression analyses found no significant associations between the TOTPAR-SPID difference in standardized effect size and trial design characteristics. Results from this review suggest that for acute pain studies, utilizing TOTPAR to assess pain relief may be more sensitive to treatment effects than utilizing SPID to assess pain intensity. Perspective: The results of this meta-analysis suggest that TOTPAR may be more sensitive to treatment effects than SPIDs are in analgesic trials examining acute pain. We found that standardized effect sizes were higher for TOTPAR compared to SPIDs. (C) 2015 by the American Pain Society
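The standardized effect sizes compared above (1.13 for TOTPAR vs 1.02 for SPID) are between-group mean differences scaled by a pooled standard deviation, which is what makes the two endpoints comparable across studies. A minimal sketch, with entirely hypothetical summary statistics (the function names and numbers are illustrative, not from the meta-analysis):

```python
from math import sqrt


def pooled_sd(s1, n1, s2, n2):
    # Pooled standard deviation of two independent groups.
    return sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))


def standardized_effect(mean_trt, mean_pbo, s1, n1, s2, n2):
    # Cohen's d: between-group mean difference over the pooled SD.
    return (mean_trt - mean_pbo) / pooled_sd(s1, n1, s2, n2)


# Hypothetical TOTPAR-style summaries: treatment mean 12.0 vs placebo 8.0,
# SD 5.0 in each group of 50 participants.
d = standardized_effect(12.0, 8.0, 5.0, 50, 5.0, 50)
print(d)
```

Because the scaling removes the raw units, a TOTPAR-based d and a SPID-based d from the same trial can be subtracted directly, which is how the .11 TOTPAR-SPID difference above is formed.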