
Detection of heart murmurs from stethoscope sounds is a key clinical technique for identifying cardiac abnormalities. We describe the creation of an ensemble classifier, using both deep and hand-crafted features, that screens for heart murmurs and clinical abnormality from phonocardiogram recordings taken at multiple auscultation locations. The model was created by the team Murmur Mia! for the George B. Moody PhysioNet Challenge 2022. Methods: Recordings were first filtered through a gradient boosting algorithm to detect the 'Unknown' class. We assume that these recordings are of poor quality, and hence use input features commonly employed to assess audio quality. Two further models, a gradient boosting model and an ensemble of convolutional neural networks, were trained using time-frequency features and mel-frequency cepstral coefficients (MFCCs) as inputs, respectively. The model outputs were combined using logistic regression, with bespoke rules to convert per-recording outputs into patient-level predictions. Results: On the hidden challenge test set, our classifier scored 0.755 for the weighted accuracy and 14228 for the clinical outcome challenge metric, placing 9th of 40 and 28th of 39 on the challenge leaderboard for the two scoring metrics, respectively.
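The abstract describes combining the outputs of the two base models with logistic regression. As a minimal illustrative sketch (not the authors' code; the feature layout, training loop, and all names here are assumptions), the stacking step can be written as a small NumPy logistic regression over the base models' predicted probabilities:

```python
import numpy as np

def sigmoid(z):
    """Logistic function mapping real scores to probabilities."""
    return 1.0 / (1.0 + np.exp(-z))

def fit_stacker(base_probs, labels, lr=0.5, n_iter=2000):
    """Fit logistic-regression weights over base-model probabilities.

    base_probs: (n_samples, n_models) array of predicted probabilities
    labels:     (n_samples,) array of 0/1 targets
    Returns a weight vector (one weight per model, plus a bias term).
    """
    X = np.column_stack([base_probs, np.ones(len(labels))])  # append bias column
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = sigmoid(X @ w)
        grad = X.T @ (p - labels) / len(labels)  # gradient of log-loss
        w -= lr * grad
    return w

def predict_stacker(base_probs, w):
    """Combined probability from the fitted stacking weights."""
    X = np.column_stack([base_probs, np.ones(base_probs.shape[0])])
    return sigmoid(X @ w)
```

In practice a library implementation (e.g. scikit-learn's `LogisticRegression`) would normally be used; the point of the sketch is only that the combiner learns one weight per base model plus a bias, and the bespoke recording-to-patient rules would then be applied on top of these combined probabilities.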

Conference paper
