Detail record – research products

Title: Explainable artificial intelligence enhances the ecological interpretability of black-box species distribution models
Abstract: Species distribution models (SDMs) are widely used in ecology, biogeography and conservation biology to estimate relationships between environmental variables and species occurrence data and make predictions of how their distributions vary in space and time. During the past two decades, the field has increasingly made use of machine learning approaches for constructing and validating SDMs. Model accuracy has steadily increased as a result, but the interpretability of the fitted models, for example the relative importance of predictor variables or their causal effects on focal species, has not always kept pace. Here we draw attention to an emerging subdiscipline of artificial intelligence, explainable AI (xAI), as a toolbox for better interpreting SDMs. xAI aims at deciphering the behavior of complex statistical or machine learning models (e.g. neural networks, random forests, boosted regression trees), and can produce more transparent and understandable SDM predictions. We describe the rationale behind xAI and provide a list of tools that can be used to help ecological modelers better understand complex model behavior at different scales. As an example, we perform a reproducible SDM analysis in R on the African elephant and showcase some xAI tools such as local interpretable model-agnostic explanation (LIME) to help interpret local-scale behavior of the model. We conclude with what we see as the benefits and caveats of these techniques and advocate for their use to improve the interpretability of machine learning SDMs.
Source: Ecography (Cop.) 44 (2), pp. 199–205
Keywords: ecological modeling; explainable artificial intelligence; habitat suitability modeling; interpretable machine learning; species distribution model; xAI
Journal: Ecography (Cop.)
Publisher: Blackwell, Oxford, United Kingdom
Year: 2021
Type: Journal article
DOI: 10.1111/ecog.05360
Authors: Ryo, Masahiro; Angelov, Boyan; Mammola, Stefano; Kass, Jamie M.; Benito, Blas M.; Hartig, Florian
ISSN: 0906-7590
Web of Science ID: WOS:000589919100001
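The LIME technique named in the abstract explains one prediction of a black-box model by perturbing the input around that instance, weighting the perturbed samples by proximity, and fitting a weighted linear surrogate whose coefficients approximate each predictor's local effect. As a rough illustration of that idea only (not the paper's R analysis), here is a minimal pure-Python sketch; `black_box`, the two "environmental" predictors, and all numeric settings are invented for demonstration.

```python
import math
import random

def black_box(temp, precip):
    # Hypothetical black-box SDM: occurrence probability peaks at
    # intermediate temperature and rises with precipitation.
    return 1 / (1 + math.exp(-(4 * precip - 10 * (temp - 0.5) ** 2)))

def lime_explain(f, x0, n_samples=2000, kernel_width=0.3, seed=0):
    """LIME-style local surrogate: perturb around x0, weight samples by
    proximity, fit a weighted linear model; the returned coefficients
    approximate each feature's local effect on the prediction."""
    rng = random.Random(seed)
    X, y, w = [], [], []
    for _ in range(n_samples):
        x = [xi + rng.gauss(0, 0.2) for xi in x0]   # perturbed instance
        X.append([1.0] + x)                         # intercept + features
        y.append(f(*x))                             # black-box prediction
        d = math.dist(x, x0)
        w.append(math.exp(-(d ** 2) / kernel_width ** 2))  # proximity kernel
    # Weighted least squares via the normal equations A @ coef = b.
    k = len(x0) + 1
    A = [[sum(w[i] * X[i][r] * X[i][c] for i in range(n_samples))
          for c in range(k)] for r in range(k)]
    b = [sum(w[i] * X[i][r] * y[i] for i in range(n_samples)) for r in range(k)]
    # Solve the small system by Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            m = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    coef = [0.0] * k
    for r in range(k - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c]
                              for c in range(r + 1, k))) / A[r][r]
    return coef  # [intercept, local temp effect, local precip effect]

coef = lime_explain(black_box, [0.9, 0.5])
print(coef)
```

The sign of each coefficient indicates the local direction of a predictor's effect at the explained instance; practical LIME implementations (the `lime` packages in R and Python) add feature discretization and sample-selection refinements on top of this core loop.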