As artificial intelligence (AI) takes on crucial tasks such as diagnosing and treating disease, deep learning models for medical imaging will need to become more trustworthy, delivering predictions about medical care that practitioners and patients can rely on.
A new deep learning method proposed by a group of computer scientists aims to increase the accuracy and interpretability of classifier models that identify disease types from diagnostic images, without compromising reliability. The method uses a concept known as confidence calibration, which systematically adjusts the model’s predictions so that its confidence corresponds to the expectations of a human expert in the real world.
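The article does not detail the calibration procedure itself. As a point of reference, one common form of confidence calibration is temperature scaling, sketched below; the network outputs, validation tensors, and fitting choices here are illustrative assumptions, not the authors’ implementation.

```python
# A minimal sketch of temperature scaling, one common confidence-calibration
# technique. All tensors below are stand-ins, not the authors' data.
import torch
import torch.nn as nn

class TemperatureScaler(nn.Module):
    """Rescales logits by a learned temperature T so that softmax
    confidences better match observed accuracy."""
    def __init__(self):
        super().__init__()
        self.log_t = nn.Parameter(torch.zeros(1))  # T = exp(log_t) > 0

    def forward(self, logits):
        return logits / self.log_t.exp()

def fit_temperature(scaler, logits, labels, steps=200, lr=0.05):
    """Fit T on held-out validation logits by minimizing the NLL."""
    opt = torch.optim.Adam(scaler.parameters(), lr=lr)
    nll = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = nll(scaler(logits), labels)
        loss.backward()
        opt.step()
    return scaler

# Usage (hypothetical validation set of 7-class skin-lesion logits):
val_logits = torch.randn(512, 7)          # stand-in for model outputs
val_labels = torch.randint(0, 7, (512,))  # stand-in for true classes
scaler = fit_temperature(TemperatureScaler(), val_logits, val_labels)
calibrated_probs = torch.softmax(scaler(val_logits), dim=1)
```

Because the temperature is fit on a held-out set, it changes only how confident the model claims to be, not which class it picks.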
Because the reliability of machine-learned models is difficult to quantify in practice, the researchers developed the “reliability plot,” which incorporates experts into the inference loop to reveal the trade-off between model autonomy and accuracy. Allowing a model to defer its predictions when its confidence is low enables a comprehensive assessment of its dependability.
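To make the deferral idea concrete, here is a minimal sketch of confidence-based abstention of the kind a reliability plot summarizes: the model answers only when its top-class confidence clears a threshold and defers the rest to an expert. The thresholds, stand-in arrays, and seven-class setup are assumptions for illustration, not the paper’s implementation.

```python
# Sketch of confidence-based deferral: sweep a threshold and trace the
# autonomy (coverage) vs. accuracy trade-off. Data below is synthetic.
import numpy as np

def coverage_accuracy(probs, labels, threshold):
    """Return (coverage, accuracy) when the model predicts only on
    samples whose top-class confidence meets the threshold; the rest
    are deferred to a human expert."""
    confidence = probs.max(axis=1)
    keep = confidence >= threshold       # samples the model answers itself
    coverage = keep.mean()               # fraction handled autonomously
    if keep.sum() == 0:
        return 0.0, float("nan")         # everything was deferred
    preds = probs[keep].argmax(axis=1)
    accuracy = (preds == labels[keep]).mean()
    return float(coverage), float(accuracy)

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(7), size=1000)  # stand-in predicted probs
labels = rng.integers(0, 7, size=1000)        # stand-in ground truth
for t in (0.0, 0.3, 0.5, 0.7, 0.9):
    cov, acc = coverage_accuracy(probs, labels, t)
    print(f"threshold={t:.1f}  coverage={cov:.2f}  accuracy={acc:.2f}")
```

Raising the threshold typically increases accuracy on the cases the model keeps while lowering coverage, which is exactly the trade-off a reliability plot exposes.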
The authors evaluated the approach on dermoscopy images of skin lesions used for skin cancer screening. Each image represents one of several disease conditions: melanoma, melanocytic nevus, basal cell carcinoma, actinic keratosis, benign keratosis, dermatofibroma, or a vascular lesion. The researchers demonstrated that calibration-driven learning yields more accurate and dependable detectors than current deep learning approaches, reaching 80 percent accuracy on this challenging benchmark, compared with 74 percent for standard neural networks.