Auscultation remains a cornerstone of clinical practice, essential for both initial evaluation and continuous monitoring. Clinicians listen to lung sounds and reach a diagnosis by combining them with the patient's medical history and test results. Given this strong association, multitask learning (MTL) offers a compelling framework for modeling these relationships simultaneously, integrating respiratory sound patterns with disease manifestations. While MTL has shown considerable promise in medical applications, a significant research gap remains in understanding the complex interplay between respiratory sounds, disease manifestations, and patient metadata attributes. This study investigates how integrating MTL with cutting-edge deep learning architectures can enhance both respiratory sound classification and disease diagnosis. Specifically, we extend recent findings on the beneficial impact of metadata in respiratory sound classification by evaluating its effectiveness within an MTL framework. Our comprehensive experiments reveal significant improvements in both lung sound classification and diagnostic performance when stethoscope information is incorporated into the MTL architecture.

Clinical relevance: Our integrated MTL approach has immediate clinical applications in supporting medical professionals' diagnostic decisions, including lung sound classification to aid in detecting respiratory disorders, potentially reducing misdiagnosis rates and improving patient outcomes in respiratory care settings (85.83% and 78.86% specificity, along with 94.09% and 41.56% sensitivity, for disease diagnosis and lung sound classification, respectively).
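The MTL setup described above, a shared audio representation augmented with stethoscope metadata and feeding two task-specific heads (lung sound classification and disease diagnosis), can be sketched as follows. This is a minimal illustration under assumed dimensions and a simple weighted-sum loss, not the authors' implementation; the embedding size, number of stethoscope types, class counts, and task weight `alpha` are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the true class per sample
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

# Hypothetical sizes: a 128-dim embedding from a shared audio encoder,
# 4 stethoscope types (one-hot metadata), 4 lung-sound classes
# (e.g. normal / crackle / wheeze / both), 3 diagnosis classes.
n, d_audio, n_steth = 8, 128, 4
n_sound, n_disease = 4, 3

audio_emb = rng.normal(size=(n, d_audio))             # stand-in for encoder output
steth = np.eye(n_steth)[rng.integers(0, n_steth, n)]  # one-hot stethoscope ID
shared = np.concatenate([audio_emb, steth], axis=1)   # metadata-augmented representation

# Two task-specific linear heads sit on top of the shared representation
W_sound = rng.normal(size=(shared.shape[1], n_sound)) * 0.01
W_dis = rng.normal(size=(shared.shape[1], n_disease)) * 0.01

y_sound = rng.integers(0, n_sound, n)   # lung-sound labels
y_dis = rng.integers(0, n_disease, n)   # diagnosis labels

loss_sound = cross_entropy(softmax(shared @ W_sound), y_sound)
loss_dis = cross_entropy(softmax(shared @ W_dis), y_dis)

alpha = 0.5  # assumed task-weighting hyperparameter
mtl_loss = alpha * loss_sound + (1 - alpha) * loss_dis
print(shared.shape, float(mtl_loss) > 0)
```

Because both heads share the metadata-augmented representation, gradients from the diagnosis loss also shape the features used for lung sound classification, which is the mechanism by which MTL lets the two tasks inform each other.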