(C) PLOS One [1]. This unaltered content originally appeared in journals.plos.org. Licensed under Creative Commons Attribution (CC BY) license. URL: https://journals.plos.org/plosone/s/licenses-and-copyright
------------
Selective publication of antidepressant trials and its influence on apparent efficacy: Updated comparisons and meta-analyses of newer versus older trials

Erick H. Turner (Behavioral Health and Neurosciences Division, Veterans Affairs Portland Health Care System, Portland, Oregon, United States of America; Department of Psychiatry, Oregon Health & Science University)

Date: 2022-01

Background: Valid assessment of drug efficacy and safety requires an evidence base free of reporting bias. Using trial reports in Food and Drug Administration (FDA) drug approval packages as a gold standard, we previously found that the published literature inflated the apparent efficacy of antidepressant drugs. The objective of the current study was to determine whether this has improved with recently approved drugs.

Methods: Using medical and statistical reviews in FDA drug approval packages, we identified 30 Phase II/III double-blind placebo-controlled acute monotherapy trials, involving 13,747 patients, of desvenlafaxine, vilazodone, levomilnacipran, and vortioxetine; we then identified corresponding published reports. We compared the data from this newer cohort of antidepressants (approved February 2008 to September 2013) with the previously published dataset on 74 trials of 12 older antidepressants (approved December 1987 to August 2002).

Results: Using logistic regression, we examined the effects of trial outcome and trial cohort (newer versus older) on transparent reporting (whether published and FDA conclusions agreed). Among newer antidepressants, transparent publication occurred more with positive (15/15 = 100%) than negative (7/15 = 47%) trials (OR 35.1, CI 95% 1.8 to 693). Controlling for trial outcome, transparent publication occurred more with newer than older trials (OR 6.6, CI 95% 1.6 to 26.4). Within negative trials, transparent reporting increased from 11% to 47%.

Competing interests: I have read the journal’s policy and the authors of this manuscript have the following competing interests: ET previously worked as a Medical Officer for the US Food and Drug Administration (FDA) and reviewed applications submitted by pharmaceutical companies to determine whether they should be approved for US marketing. He has no financial interest in any pharmaceutical products, approved or otherwise. AC is supported by the National Institute for Health Research (NIHR) Oxford Cognitive Health Clinical Research Facility, by an NIHR Research Professorship (grant RP-2017-08-ST2-006), by the NIHR Oxford and Thames Valley Applied Research Collaboration, and by the NIHR Oxford Health Biomedical Research Centre (grant BRC-1215-20005). He has received research and consultancy fees from INCiPiT (Italian Network for Paediatric Trials), CARIPLO Foundation, and Angelini Pharma, outside the submitted work. TAF reports grants and personal fees from Mitsubishi-Tanabe, personal fees from MSD, and grants and personal fees from Shionogi, outside the submitted work; in addition, he has a patent (2020-548587) concerning smartphone CBT apps pending, and intellectual property for Kokoro-app licensed to Mitsubishi-Tanabe. GS has received fees, paid to the University of Bern, from Biogen and Merck for participating in a meeting as a real-world evidence and meta-analysis expert. YV has no competing interests.

This is an open access article, free of all copyright, and may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose. The work is made available under the Creative Commons CC0 public domain dedication.

This raises the question: has the level of transparency of clinical trials changed specifically for the drugs for which reporting bias has perhaps been best described, namely antidepressants?
Since our 2008 publication, several new antidepressant drugs have entered the United States market. Using the earlier study of older antidepressants for comparison, the current study aims to determine whether, and to what degree, the apparent efficacy of the newer drugs has similarly been inflated in published journal articles. More specifically, it asks: does trial outcome (positive or not) still influence whether and how a trial is reported? And does reporting bias still inflate effect size (ES)? An examination of papers published in all disciplines between 1990 and 2007 suggested an increase in reporting bias over time [10]. Since then, however, there have been important transparency-related policy changes, such as requirements for registration of clinical trials in 2005 [11,12] and for reporting of trial results mandated by the Food and Drug Administration Amendments Act (FDAAA) of 2007 [13], and recent work suggests that the level of transparency has improved [14,15]. Using similar methodology, evidence for reporting bias has also been found among drugs for the treatment of schizophrenia [3] and of anxiety disorders [4], although not to the extent observed in antidepressant trials. Nor is reporting bias limited to psychotropic drugs: it has been documented for both pharmacological and nonpharmacological interventions across medical indications [5–9], and it appears to exist in the social, biological, and physical sciences as well [10]. One of the early studies on reporting bias in antidepressant trials was published by our group [2]. Examining 12 second-generation antidepressants approved by the Food and Drug Administration (FDA) between 1986 and 2004, we found strong evidence for both study publication bias and outcome reporting bias.
Because drug companies must report results of all Phase II/III trials to the FDA in order to gain approval for a new drug, FDA review documents can be considered a gold standard, an unbiased sample of all studies undertaken. Compared to trial results in FDA review documents, results published in journals inflated the apparent efficacy of antidepressants over placebo, both in terms of the proportion of positive trials and in terms of effect size (ES). Reporting bias can lead to overestimates of efficacy and/or underestimates of harms and can thus undermine the evidence base regarding drugs and other interventions. Reporting bias takes several forms, including study publication bias and outcome reporting bias [1]. With study publication bias, entire studies are published or not depending on their results; with outcome reporting bias, studies are published but their outcomes are reported selectively depending on their results. We included only doses approved by the FDA, as reflected in the Dosage and Administration section of the product label. While the wording in this section was clear in many cases, in others it was ambiguous; thus, for certain doses, arguments could be made for both inclusion and exclusion. We resolved this by conducting a primary meta-analysis (MA) using broad dose inclusion criteria and a sensitivity MA using narrow (restrictive) dose inclusion criteria. For rationale and elaboration, please see the legend to Table 2 and Table C in S1 Text. As explained in the latter, for dose reasons, one trial (vilazodone #244) was excluded from both MAs. Because journal-based ES and FDA-based ES are not independent (both are derived from the same set of trials), we did not perform a formal statistical comparison through, for instance, meta-regression. As an exploratory method, we did perform multivariate MA, which is capable of handling such dependency, but it is limited in another respect.
As explained further in the Supporting information, multivariate MA relies on the correlation between paired FDA-based and journal-based ES values, and complete pairs exist for published trials but not for unpublished trials. Because unpublished trials, compared to published trials, are much more likely to be negative and have systematically smaller ES values [2], journal-based ES values are missing not at random. Thus, the multivariate approach is less well suited to the examination of study publication bias than to outcome reporting bias. Because our dataset contains both of these forms of reporting bias, results of the multivariate MA are provided as Supporting information (S1 Text and S8 Fig). We examined whether reporting bias misinformed the public by comparing one meta-analysis (MA) using trial data obtained from FDA reviews to a second MA using data from the corresponding publications. The MAs were conducted using the metan module in Stata 11 [25], with random-effects pooling and the DerSimonian–Laird estimator for heterogeneity. The resulting effect measures (standardized mean difference, Hedges’ g, ± 95% confidence interval) obtained by author ET were verified against those obtained independently by author YD. We then compared the results of the two MAs, with effect size inflation (ESI), presumably due to reporting bias, calculated as journal-based ES (ES Journal) minus FDA-based ES (ES FDA). To facilitate visual comparison of these values, we exported the Stata-generated forest plots to vector-based graphics software (Intaglio version 3.9), which allowed corresponding ES FDA and ES Journal values to be placed alongside one another. Such pairwise forest plots were generated showing ES values at the level of trial, drug, and cohort.
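To illustrate the two pooling steps used in these MAs (a fixed-effect combination of dose arms within each trial, with the shared placebo group counted only once, followed by DerSimonian–Laird random-effects pooling across trials), here is a minimal Python sketch. It is not the authors' Stata code; in particular, splitting the placebo n evenly across the dose-arm comparisons is our assumption about how "counted once" was implemented (it is one common approach for multi-arm trials), not a procedure the paper spells out.

```python
import math

def hedges_variance(g, n_treat, n_placebo):
    """Approximate variance of Hedges' g for one drug-vs-placebo comparison."""
    n = n_treat + n_placebo
    return (n_treat + n_placebo) / (n_treat * n_placebo) + g**2 / (2 * n)

def trial_level_es(dose_arms, n_placebo):
    """Fixed-effect combination of dose-arm ES values within one trial.
    dose_arms is a list of (n_treat, g) pairs. The shared placebo n is split
    evenly across arms so it is not counted redundantly, which would yield a
    spuriously low standard error."""
    n_p = n_placebo / len(dose_arms)
    weights = [1 / hedges_variance(g, n_t, n_p) for n_t, g in dose_arms]
    w_sum = sum(weights)
    es = sum(w * g for w, (_, g) in zip(weights, dose_arms)) / w_sum
    return es, math.sqrt(1 / w_sum)

def dl_pool(effects, std_errs):
    """DerSimonian-Laird random-effects pooling of trial-level effect sizes."""
    w = [1 / se**2 for se in std_errs]
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sw - sum(wi**2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-trial variance
    wr = [1 / (se**2 + tau2) for se in std_errs]
    pooled = sum(wi * e for wi, e in zip(wr, effects)) / sum(wr)
    se = math.sqrt(1 / sum(wr))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```

A quick sanity check: with identical inputs in every arm or trial, both functions reduce to the common effect, and splitting the placebo group yields a larger (more conservative) trial-level standard error than double-counting it would.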
As in previous work [ 2 – 4 ], for each multiple-dose trial, we used fixed effects MA to obtain a single trial-level ES; to avoid a spuriously low standard error, each trial’s shared placebo n was counted once rather than redundantly for each dose group. We examined 2 predictors of transparent publication. The first was trial cohort (i.e., older versus newer antidepressants). The second was trial outcome according to the FDA report—positive (study drug clearly statistically superior to placebo on the primary outcome) or not positive. The main model also included a third variable for the interaction between the first 2 predictors. To estimate the associations, within Stata 11 [ 25 ], we employed Firth (penalized) logistic regression using the module firthlogit [ 25 – 27 ]. As a secondary analysis, we employed exact logistic regression [ 26 ] using the module exlogistic [ 25 ]. (These methods were chosen because, in the context of rare events, such as FDA-positive trials that are not transparently published, standard logistic regression fails. Please see S1 Text for elaboration.) We also undertook the following post hoc univariable analyses: Because transparent publication is arguably more likely to occur with positive than negative trials, we examined the effect of cohort within each of these subsets; similarly, we examined the effect of trial outcome within each cohort. We considered a trial to be published transparently if the trial was published in a way that was consistent with the FDA report of that trial. Transparent publication was deemed absent when (a) the trial results were not published (study publication bias) or (b) the results were published but in a way that conflicted with the FDA report (outcome reporting bias). 
For example, if a trial was reported by the FDA to be negative (nonsignificant on the primary outcome), but the publication conveyed a positive overall result by emphasizing statistically significant results in the beginning of the results section (see the definition of the apparent primary outcome later in this section) and in the abstract, that trial was deemed not transparently published. Many trials consisted of 2 or more treatment arms compared to a common placebo group, resulting in 2 or more P values. Treatment-arm-level P values reported by the FDA (P FDA) were compared to P values reported in corresponding journal articles (P Journal). Because of nonindependence (the same placebo group could be represented in 2 or more data points), these comparisons were examined descriptively in the form of scatterplots, one for each cohort of trials. The scatterplots necessarily excluded treatment arms whose results were not published (no P Journal values). From the corresponding journal articles, consistent with our previous study of the apparent (to the average clinician-reader) efficacy of antidepressants [2], we extracted the summary statistics on the apparent primary outcome. This was defined as the drug–placebo comparison highlighted as the trial’s main result by virtue of its being reported first in the article’s results section. As in previous studies [2,3], we employed double data extraction and entry. Data were extracted and entered by 3 teams (ET with 3 assistants; YD; TF and YO), compared using Boolean formulas in Excel, and reconciled for any discrepancies. For each trial, we extracted the results on the primary outcome from the FDA reviews, including the summary statistics and the FDA reviewer’s judgment as to whether the trial was positive, i.e., whether it provided “substantial evidence of effectiveness” for purposes of marketing approval [24].
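From the extracted summary statistics (per-arm means, standard deviations, and sample sizes), the standardized mean difference can be computed as Cohen's d multiplied by Hedges' small-sample correction factor, and effect size inflation is then simply the journal-based value minus the FDA-based value. A brief Python sketch follows; it is illustrative only (the sign convention depends on the direction of the rating scale), not the authors' code.

```python
import math

def hedges_g(m_drug, sd_drug, n_drug, m_pbo, sd_pbo, n_pbo):
    """Standardized mean difference with Hedges' small-sample correction."""
    # pooled standard deviation across the two arms
    sp = math.sqrt(((n_drug - 1) * sd_drug**2 + (n_pbo - 1) * sd_pbo**2)
                   / (n_drug + n_pbo - 2))
    d = (m_drug - m_pbo) / sp                 # Cohen's d
    j = 1 - 3 / (4 * (n_drug + n_pbo) - 9)    # correction factor J
    return j * d

def esi(es_journal, es_fda):
    """Effect size inflation: journal-based ES minus FDA-based ES."""
    return es_journal - es_fda
```

For example, applied to the overall newer-cohort values reported below, esi(0.29, 0.24) gives the +0.05 inflation quoted in the Results.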
Having identified the inception cohort of premarketing trials registered with the FDA, we used PubMed to search for matching publications reasonably discoverable by clinicians. Example search syntax for one drug was “desvenlafaxine[title] placebo (“major depressive disorder” OR “major depression”).” From the search output, titles and abstracts were screened to include journal articles focused on the overall efficacy of the drug in question for major depressive disorder; thus, we excluded articles focused on other indications, subsets with specific comorbid conditions, particular symptom clusters, safety (as opposed to efficacy), specific demographic samples, trials lacking a parallel design (add-on, open-label, crossover), trials that were not placebo controlled, trials not involving acute treatment (long-term trials, including maintenance trials), and trials involving other routes of administration. For desvenlafaxine and vilazodone, we were unable to identify publications corresponding to all of the FDA-registered trials, so, following a method reported previously [3], we searched for bibliographic information on these trials within recent review articles [18–23], whose authors made use of additional databases, including EMBASE [18,22], ClinicalTrials.gov [22,23], and Cochrane Central [18]. Additionally, the authors of one article contacted the sponsor [22], and the authors of another article were employees of the sponsor [20]. A literature search for the FDA-registered trials was also carried out independently by author YD in the context of a separate publication [17]. Separately, TF and YO identified the trials in ClinicalTrials.gov using the “Other Study ID Numbers” field and identified corresponding publications using the “Publication of Results” field.
Matching of FDA-registered trials to publications was confirmed using trial design, duration, drugs used (study drug, placebo, active comparator), and number of participants randomized to each treatment arm. The preferred publication type was a stand-alone article, i.e., an article reporting on a single trial, with exceptions allowed as previously described [3]. Fig 4 compares the overall FDA- versus journal-based ES values for the newer versus the older cohort of antidepressants. As previously reported, the overall ESI for the older cohort was 0.10 (= 0.41 − 0.31), larger than the ESI found in either the primary or the sensitivity MA for the newer cohort. For additional context, ESI for individual drugs in the older cohort ranged from 0.03 (paroxetine controlled release) to 0.22 (mirtazapine), with a median of 0.10 [2]. In the sensitivity MAs (right panel of Fig 3), which employed restrictive/narrow dose inclusion criteria (see Methods), ES values were generally higher, especially the FDA-based values, bringing them into closer alignment with the journal-based values. Thus, the ESIs for vilazodone and desvenlafaxine decreased to nearly zero. The overall FDA-based Hedges’ g in these analyses was 0.33 (CI 95%: 0.25, 0.41), while the overall journal-based value was 0.33 (CI 95%: 0.25, 0.41), resulting in an ESI of approximately zero (+0.0). The primary (left panel) and sensitivity (right panel) MAs were based on broad and narrow/restrictive dose inclusion criteria, respectively, as described in the text. For each antidepressant, 2 drug-level ES values are shown, one based on clinical trial data from FDA reviews and one based on data from the journal articles. ES, effect size; g, Hedges’ g; FDA, Food and Drug Administration; MA, meta-analysis. The abovementioned drug-level ES values are summarized and compared in Fig 3.
In the primary MA (left panel), as mentioned above, ESI was largest for vilazodone (0.28 − 0.16 = 0.12), followed by desvenlafaxine (0.31 − 0.24 = 0.07). The overall FDA-based Hedges’ g for the 4 newer antidepressants was 0.24 (CI 95% 0.18, 0.30), while the overall journal-based ES was 0.29 (CI 95% 0.23, 0.36), for an ESI of +0.05. S7 Fig is a forest plot comparing trial-level ES based on FDA versus journal data for each of the 4 newer antidepressants (using broad dose inclusion criteria). In S7 Fig, trials not transparently published are highlighted for desvenlafaxine and vilazodone; these give rise to the observed (FDA- versus journal-based) ES differences at the level of drug (quantified above). For the other 2 drugs, levomilnacipran and vortioxetine, all trials were deemed transparently published (none are highlighted); thus, their FDA- and journal-based ES values, at the level of both trial and drug, are virtually the same. Dose groups and trials included in and excluded from the primary and sensitivity MAs are listed in Table C in S1 Text. Meta-analytic trial-level results from Stata, including forest plots, based on data from the FDA and the published literature, for both the primary and sensitivity MAs, are shown in S3–S6 Figs and Tables E–H in S1 Text. As shown in Table A in S1 Text, the multivariable model’s interaction effect was nonsignificant (OR 0.19, CI 95% 0.006 to 6.7, P = 0.36); omitting the interaction term had little impact on the multivariable model’s 2 main effects. For all of the abovementioned analyses, similar results were obtained using exact logistic regression (S2 Fig and Table A in S1 Text). Post hoc analyses suggested that the higher rate of transparent publication in the newer cohort was limited to negative trials, among which it increased from 11% to 47%.
Negative trials in the newer cohort were 6.6 times more likely to be transparently reported than negative trials in the older cohort (OR 6.6, CI 95% 1.6 to 26.4, P = 0.008), equal to the main effect of cohort from the multivariable model. By contrast, positive trials were transparently reported approximately 100% of the time in both cohorts; thus, the post hoc univariable analysis showed no effect (OR 1.3, CI 95% 0.05 to 33, P = 0.88). With respect to the variable for cohort, the overall proportion of transparently reported trials increased from 54% to 73%. Controlling for trial outcome, trials in the newer cohort were 6.6 times more likely to be transparently reported than trials in the older cohort (OR 6.6, CI 95% 1.6 to 26.4, P = 0.008). For all trials regardless of outcome (dashed oblique line), the proportion of transparently reported trials increased from 54% (older drugs) to 73% (newer drugs). Within the subset of FDA-positive trials (blue line), transparent reporting, which was already nearly 100% for the older cohort, showed no further increase. By contrast, within FDA-negative trials (green line), transparent reporting increased from 11% to 47%. FDA, Food and Drug Administration. The effects of trial outcome and cohort on transparent reporting were examined using logistic regression. Please refer to Fig 2 for all counts and proportions, as well as odds ratios. (Further logistic regression results are available in Table A in S1 Text.) With respect to the variable for trial outcome, transparent reporting occurred more often for FDA-positive than for FDA-negative trials (OR 181, CI 95% 26.9 to 1,219, P < 0.001). Post hoc univariable analyses showed significant effects of trial outcome within the older cohort (OR 181, CI 95% 26.9 to 1,219, P < 0.001), consistent with findings reported earlier [2], and within the newer cohort (OR 35.1, CI 95% 1.8 to 693, P = 0.019).
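The zero cell in the newer cohort (no FDA-positive trial was non-transparently published) is what makes standard logistic regression fail here. For a single binary predictor, Firth's penalized estimate is known to coincide with the Haldane–Anscombe correction of adding 0.5 to every cell of the 2 × 2 table, and that simple correction reproduces the odds ratios reported above. The check below is illustrative, not the authors' Stata analysis.

```python
def haldane_or(a, b, c, d):
    """Odds ratio from a 2x2 table with 0.5 added to each cell
    (Haldane-Anscombe correction), usable even with a zero cell."""
    return ((a + 0.5) * (d + 0.5)) / ((b + 0.5) * (c + 0.5))

# Trial outcome within the newer cohort:
# positive trials: 15 transparent, 0 not; negative trials: 7 transparent, 8 not
or_outcome = haldane_or(15, 0, 7, 8)   # approximately 35.1, as reported

# Cohort effect within negative trials:
# newer: 7 transparent, 8 not; older: 4 transparent, 33 not
or_cohort = haldane_or(7, 8, 4, 33)    # approximately 6.6, as reported
```

Without the correction, the first table would yield an infinite odds ratio (division by zero), which is why the paper resorts to penalized and exact logistic regression.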
Within the newer cohort, 15 of 15 (100%) positive trials were reported transparently versus 7 of 15 (47%) negative trials. Of the 8 FDA-negative trials not transparently published, 6 were simply not published at all. One desvenlafaxine trial was referred to in one review publication as “an unpublished report with the code name Des 223” [18]; in a second review publication, authored by employees of the sponsor, it was referred to as “data on file” [20]. Regarding vilazodone, one review publication referred to 5 trials (#244 to 248) as “astonishingly unfavorable” and cited the FDA drug approval package rather than any publications; a second review publication listed them in a paragraph and a data table devoted to unpublished trials. The 2 other FDA-negative trials, desvenlafaxine EU trial 309 and US trial 317, were published but classified as not transparently published for 2 reasons. First, they were published solely in the form of a single positive “pooled analysis” paper [27]. (This form of reporting bias has been previously described in the antidepressant literature [28].) To be classified as transparently reported, the 2 trials should have been published in separate stand-alone papers highlighting their nonsignificant results, or in a combined article highlighting the 2 nonsignificant results. Second, in the journal article, a nonprimary method of handling dropouts (MMRM instead of LOCF; Table D in S1 Text) was used, leading to statistically significant pooled results. (Although pooling trials increases statistical power, this alone would not have yielded a statistically significant result: via post hoc MA of the FDA-reported primary results for these 2 trials, we calculated Hedges’ g = 0.10 (CI 95% −0.08 to 0.28, P = 0.27).) These significant results were highlighted in the abstract and at the beginning of the results section, while the nonsignificant results from the individual trials were reported beginning on the fifth page of the results section and not in the abstract.
For the 4 newer antidepressants, Table 2 and S1 Fig show each trial’s overall outcome, as determined by the FDA, and its corresponding publication status. Of these 30 trials, the FDA deemed 15 (50%) to be positive, i.e., statistically significant on the prespecified primary outcome, consistent with the proportion previously reported for the older cohort [2]. Among these 15 FDA-positive trials, all were published in agreement with the FDA (transparently reported as positive). Among the 15 not-positive trials, 7 (47%) were transparently published (as nonsignificant), a higher proportion than that observed for the older cohort (4/37 = 11%) [2]. The remaining 8 (53%) FDA-negative trials in the newer cohort were not transparently published. P values reported by the FDA (P FDA, horizontal axes) are compared to those in corresponding journal articles (P Journal, vertical axes). The older and newer antidepressants are shown in the left and right plots, respectively. Each data point represents a drug treatment arm compared to placebo, with area proportional to the sum of their sample sizes. The dashed diagonal represents concordance between P FDA and P Journal, i.e., an absence of reporting bias. Cases of outcome reporting bias, where P FDA is NS but P Journal is reported as significant, are highlighted in yellow. (The 2 yellow circles far off the Y = X diagonal in the right-hand panel represent desvenlafaxine trials 309 and 317; please see text.) Unpublished treatment arms, with P FDA values but no corresponding P Journal values, are shown in the gray boxes and highlighted in yellow. FDA, Food and Drug Administration; NS, not significant. The total number of treatment arms was 149, with 101 and 48 in the older and newer cohorts, respectively.
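The binomial proportions and 95% confidence intervals reported for the unpublished treatment arms match a simple normal-approximation (Wald) interval; that choice of interval is our inference from the reported numbers, not something the paper states. A minimal Python sketch:

```python
import math

def wald_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a binomial proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, p - half, p + half

p_old = wald_ci(36, 101)   # older cohort: ~36%, CI ~26% to ~45%
p_new = wald_ci(9, 48)     # newer cohort: ~19%, CI ~8% to ~30%
```

Note that the Wald interval can perform poorly for proportions near 0 or 1; alternatives such as the Wilson interval give slightly different bounds here.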
Fig 1 plots the P values of all these arms against placebo as reported in the journals versus as confirmed in FDA reviews: it shows, among the 104 published treatment arms, a greater proportion lying along the Y = X (P Journal = P FDA) diagonal in the newer cohort, i.e., greater concordance between journal- and FDA-based data. The proportion of unpublished treatment arms (gray boxes) was 36% (36/101, CI 95% 26% to 45%) for the older cohort versus 19% (9/48, CI 95% 8% to 30%) for the newer cohort. The FDA-registered trials and their corresponding publications are listed in Table 2. For the cohort of 4 newer antidepressants, there were 30 applicable trials with 13,747 participants, while for the cohort of older antidepressants, there were 74 applicable trials with 12,564 participants. Median trial sample sizes for the newer and older cohorts were 439.5 and 147.5, respectively (Z = 6.72, P < 0.001 by Wilcoxon rank-sum test), different by a factor of 3.

Discussion

The data presented here suggest that reporting bias in the published literature on antidepressant drugs is still an important issue. Even within the cohort of newer antidepressants, statistical significance still has an undue influence on whether and how these trials are reported. Consistent with earlier work, we found that positive trials are much more likely to be transparently reported than negative trials, whether one looks within the newer cohort or at both cohorts combined. However, we also found evidence of improvement: antidepressant trials, especially those deemed negative by the FDA, are more likely to be published transparently than they were previously. Regarding the meta-analytic results, though they were not the subject of formal statistical analysis, the smaller ESI values for the newer, compared to the older, cohort could also be consistent with a decrease in reporting bias.
Comparison with previous findings

Our findings are consistent with at least 5 other recent studies: (1) A study [29] of Phase III randomized controlled trials (RCTs) in pediatric patients compared conference abstracts from 2008 to 2011 to subsequent publications and found evidence for “reduced but ongoing publication bias” as compared to a similar study from 15 years earlier [30]. (2) Another research group found evidence for improvements in some, though not all, measures of transparency (registration rates, results reporting, publication rates) for drugs approved in 2014, compared to drugs approved in 2012 [31]. (3) In an examination of trials of both pharmacological and nonpharmacological treatments for depression, the prevalence of proper registration and reporting was improved but still very low, despite the fact that registration and reporting had been mandatory for several years [32]. (4) Examining drugs approved for cardiovascular disease and diabetes mellitus [14], another group found a decrease in publication bias (as well as an increase in registration) among trials for drugs approved by the FDA after, compared to before, the FDAAA of 2007. (5) The same group applied similar methodology to drugs approved for several indications treated by neurologists and psychiatrists, as well as other indications (anesthesia, constipation, fibromyalgia, pain) [33]. The latter 2 studies were restricted to “pivotal” trials, a designation often assigned post hoc to trials with positive outcomes; by contrast, the current study covers all efficacy trials regardless of outcome, including so-called “failed trials” [34,35]. The current study also differs from the 5 abovementioned studies in that it focuses on one drug class for one indication (major depressive disorder), thus enabling MA.

Possible explanations

How might we explain this apparent increase in transparency? There have long been many incentives to engage in reporting bias [36].
In the past, there was little awareness within the research and clinical communities that the problem existed, and pharmaceutical companies (and others) could engage in reporting bias without fear of detection. Since then, however, there has been a cultural change, and what was once standard practice is no longer considered acceptable. Numerous policy changes have been implemented, summarized elsewhere [37]. ClinicalTrials.gov was launched in 2000, but registrations initially lagged. In 2004 (the year the FDA approved duloxetine, the newest drug within the older cohort of antidepressants [2]), the International Committee of Medical Journal Editors (ICMJE) announced that prospective registration would be a precondition for publication. The following year saw a 73% increase in the registration rate over a span of just 5 months [38]. In 2005, the WHO International Clinical Trials Registry Platform (http://apps.who.int/trialsearch/Default.aspx) was launched. In 2007, the FDAAA was enacted [13], which legally mandated public registration of applicable clinical trials and called for the augmentation of ClinicalTrials.gov with a basic results database; in 2010, FDAAA was clarified and expanded in scope to include all Phase II to IV drug and device trials, adverse events, and basic results [39]. It seems reasonable to conclude that these policy changes played a major role in bringing about the increase in transparency suggested by the current study and the others mentioned above. However, given the level of attention directed toward reporting bias with antidepressants, in the form of lawsuits [39], numerous key publications [2,40–43], and new incentives to increase transparency, for instance, the Good Pharma Scorecard [31], it is possible that substantial improvement would have occurred even without these policy changes.

Implications, theoretical and practical

However, we must caution that, while the proverbial glass of transparency is now half full, it also remains half empty.
Nothing less than full transparency should be considered acceptable in the realm of healthcare. Greater awareness of reporting bias is needed among researchers and clinicians so that they do not naively accept published research findings at face value. The abovementioned policy changes should not be celebrated until compliance with them improves. In the case of FDAAA, apparently due to a lack of political will, enforcement has been lax, leading to over $5 billion in accrued fines remaining uncollected (http://fdaaa.trialstracker.net). Additionally, many journals that ostensibly support the ICMJE policy of preregistration continue to publish a substantial number of unregistered or belatedly registered trials [32]. Perhaps what is needed most is to eliminate reporting bias at its root. FDA reviews include the results of negative, as well as positive, studies because the Agency receives study protocols before studies are undertaken, thus preventing drug companies from hiding the existence of studies or switching their outcomes. Although trial registries are intended to serve a similar purpose, they are separate from journals, in which the strength and direction of study results can continue to dictate submission and acceptance decisions. However, in an emerging peer review model known as Registered Reports [44], manuscripts are submitted and reviewed before studies are undertaken, leading to preliminary publication decisions based solely on the scientific question and methodological rigor. Registered Reports has been adopted, or offered as an option, by more than 300 journals in various fields (https://cos.io/rr/, accessed November 20, 2021), but uptake among major medical journals has unfortunately lagged.

[1] URL: https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.1003886