Systematic reviews are central to evidence-based medicine, providing synthesized insights from the available scientific data. Their reliability, however, depends on the quality and completeness of the underlying evidence. This thesis examined several aspects of evidence synthesis with the aim of making these methods more reliable, focusing on methodological guidance and its practical application.

Chapter 2 compared two approaches to identifying randomised controlled trials (RCTs) for systematic reviews: clinical trial registries and medical journal databases. Registries identified more completed RCTs, but both methods missed some studies, highlighting the need to search both sources. Improving results reporting and retrieval systems in registries could make them more viable as a primary source.

Chapter 3 evaluated the global state of results reporting in clinical trial registries. Despite mandates promoting transparency, only a minority of trial records had results available. Barriers to reporting included time constraints, publication concerns, and a lack of incentives, while barriers to using registry data included mistrust and usability issues. Recommended improvements involved better infrastructure, policies, training, and funding. With adequate support, registries could substantially enhance research transparency and evidence dissemination.

Chapter 4 addressed prediction models for COVID-19 in low- and lower-middle-income countries (LMICs), where existing models underperformed. Using data from the WHO Global Clinical Platform, two models were developed to predict mortality and ICU admission risk. Although the models performed well overall, performance varied across countries and some miscalibration was observed; external validation is strongly recommended before clinical implementation.

Chapter 5 provided a comprehensive overview of 14 methodological quality assessment tools for diagnosis and prognosis studies, outlining their similarities and differences and offering five guiding questions to help researchers, policy makers, and systematic reviewers choose the most appropriate tool for their needs.

Chapter 6 examined the inter-rater reliability of the Prediction model Risk Of Bias ASsessment Tool (PROBAST), which often rates studies as being at high risk of bias. Although overall reliability was high, variability existed at the item and domain levels. Standardizing judgements and conducting structured consensus meetings are advised to reduce discrepancies.

Chapter 7 introduced PROBAST+AI, an updated tool for assessing risk of bias and applicability in studies that use statistical or artificial intelligence (AI)/machine learning methods. This version addresses fairness and algorithmic bias, replacing the original PROBAST to better evaluate modern prediction models in healthcare.

Chapter 8 updated the TRIPOD adherence tool to TRIPOD+AI, aligning it with advances in AI-based modeling. The updated tool enables standardized scoring and monitoring of reporting quality, promoting consistency and transparency in prediction model reporting.

In conclusion, this thesis identifies key challenges in evidence synthesis and proposes actionable solutions through improved registration, reporting, and assessment tools, making healthcare decisions more reliable and transparent.
DOI: 10.33540/3059