The evaluation of tools used to predict the impact of missense variants is hindered by two types of circularity
Prioritising missense variants for further experimental investigation is a key challenge in current sequencing studies of complex diseases. A large number of in silico tools are employed for this task of pathogenicity prediction, including PolyPhen-2, SIFT, FatHMM, MutationTaster-2, MutationAssessor, CADD, LRT, phyloP and GERP++. Given the wealth of these methods, an important practical question is which of these tools generalise best, that is, correctly predict the pathogenic character of new variants.
Here we demonstrate, in a study of 10 tools on five datasets, that such a comparative evaluation of pathogenicity prediction tools is hindered by two types of circularity. These arise when (1) the same variants, or (2) different variants from the same protein, occur in both the datasets used for training and those used for evaluating these tools, which can lead to overly optimistic results. We show that comparative evaluations of predictors that do not address these types of circularity may erroneously conclude that circularity-confounded tools are the most accurate of all tools, and may even appear to outperform optimised combinations of tools, such as Condel and Logit.
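As a minimal sketch of the idea (using hypothetical variant data, not the datasets from the paper), an evaluation set can be filtered to avoid both types of circularity: type 1 by removing variants that are identical to training variants, and type 2 by additionally removing any variant whose protein occurs anywhere in the training set:

```python
def filter_circularity(train, test):
    """Drop evaluation variants affected by either type of circularity.

    Each variant is a (protein_id, substitution) tuple, e.g. ("BRCA1", "M1V").
    Returns (type1_free, type2_free): the test variants remaining after
    removing exact duplicates of training variants (type 1), and after
    additionally removing all variants whose protein also appears in the
    training set (type 2).
    """
    train_variants = set(train)
    train_proteins = {protein for protein, _ in train}

    # Type 1: the identical variant occurs in both training and evaluation data.
    type1_free = [v for v in test if v not in train_variants]
    # Type 2: a different variant from the same protein occurs in training data.
    type2_free = [v for v in type1_free if v[0] not in train_proteins]
    return type1_free, type2_free


# Hypothetical example sets:
train = [("BRCA1", "M1V"), ("TP53", "R175H")]
test = [("BRCA1", "M1V"),     # type-1 circularity: identical variant
        ("BRCA1", "C61G"),    # type-2 circularity: same protein, new variant
        ("CFTR", "F508del")]  # unaffected by either type

t1_free, t2_free = filter_circularity(train, test)
print(t1_free)  # [('BRCA1', 'C61G'), ('CFTR', 'F508del')]
print(t2_free)  # [('CFTR', 'F508del')]
```

In practice, the type-2 filter is equivalent to splitting train and test data by protein (a group-aware split) rather than by individual variant.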
Code and data can be downloaded from my GitHub repository: Python Code & Data
- The evaluation of tools used to predict the impact of missense variants is hindered by two types of circularity.
DG Grimm, CA Azencott, F Aicheler, U Gieraths, DG MacArthur, KE Samocha, DN Cooper, PD Stenson, MJ Daly, JW Smoller, LE Duncan and KM Borgwardt
Human Mutation 2015, 36(5):513-523