Community-acquired pneumonia (CAP) is a prevalent infectious disease whose management requires integrating pathogen identification, imaging findings, clinical assessment, and treatment strategies. Traditional lecture-based teaching may result in fragmented knowledge acquisition and limited development of systematic clinical reasoning. Knowledge graphs derived from multiple clinical guidelines offer a structured way to organize diagnostic and therapeutic information and may support knowledge integration when embedded within interactive learning environments. However, evidence regarding their educational application in CAP teaching remains limited. This study developed a multi-guideline knowledge graph–supported instructional approach and evaluated its association with medical students' learning outcomes and clinical reasoning performance.

A knowledge graph was constructed from the 2019 IDSA/ATS guidelines, the 2023 Chinese consensus on elderly CAP, the European severe CAP guidelines, and the 2024 emergency adult CAP guidelines. Key entities and semantic relationships, including pathogens, clinical scoring tools (CURB-65, PSI), and treatment stratification, were extracted using natural language processing techniques. One hundred medical undergraduates were randomized to either an intervention group (n = 50), which used an interactive knowledge graph–supported platform combined with case simulations, or a control group (n = 50), which received traditional lectures and case discussions. Outcomes included theoretical knowledge tests, case analysis scores, clinical skill assessments, and questionnaires evaluating clinical reasoning and learner perceptions.

The intervention group significantly outperformed the control group across all outcome domains. On the theoretical assessment, intervention-group students achieved higher total scores (87.6 ± 5.2 vs. 76.3 ± 6.8; p < 0.001), with substantial gains in multiple-choice question (MCQ) accuracy (92.4 ± 4.1 vs. 81.5 ± 7.3) and multiple-answer question (MAQ) accuracy (85.7 ± 6.8 vs. 63.2 ± 9.5). Case analysis scores were markedly higher (89.2 ± 5.6 vs. 70.8 ± 8.4), and conflict resolution accuracy improved significantly (82.4 ± 7.1 vs. 58.7 ± 10.2). Subdomain analysis showed large effect sizes for pathogen identification (Cohen's d = 1.98), scoring tool application (d = 2.14), and antibiotic selection (d = 1.85). Clinical skill assessments further demonstrated greater diagnostic accuracy (89.2% vs. 72.5%), better empirical antibiotic choices (91.6% vs. 68.3%), and shorter decision-making time (8.2 ± 1.3 vs. 12.7 ± 2.1 min; p = 0.007). Post-intervention surveys showed significantly higher learner satisfaction with knowledge integration (4.5 ± 0.3 vs. 3.2 ± 0.4), decision confidence (4.3 ± 0.5 vs. 3.1 ± 0.6), and guideline comprehension (4.6 ± 0.4 vs. 2.9 ± 0.5). Qualitative interviews corroborated the perceived benefits of visualization and enhanced clinical reasoning; participants suggested increased case complexity as a refinement.

An interactive instructional approach integrating a multi-guideline knowledge graph was associated with better learning outcomes, stronger clinical reasoning performance, and greater learner satisfaction in CAP education. The observed associations likely reflect the combined effects of structured knowledge visualization and active, self-paced, case-based learning. Because instructional modality differed between groups, these findings should not be interpreted as evidence of a causal effect attributable solely to the knowledge graph structure. Future studies employing modality-matched controls are needed to clarify the independent contribution of knowledge graph structures to educational outcomes.
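To illustrate the effect-size statistic reported above, the following is a minimal sketch of a pooled-standard-deviation Cohen's d calculation. The `cohens_d` helper is hypothetical (not from the study), and the worked example uses the theoretical-test totals reported in the abstract, assuming n = 50 per group:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d using the pooled standard deviation of two groups."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Theoretical-test totals from the abstract: 87.6 ± 5.2 vs. 76.3 ± 6.8
d_total = cohens_d(87.6, 5.2, 50, 76.3, 6.8, 50)
print(round(d_total, 2))  # ≈ 1.87, a large effect by conventional thresholds
```

Values well above 0.8 are conventionally interpreted as large effects, consistent with the subdomain effect sizes (d = 1.85–2.14) reported in the results.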