Explaining Local Discrepancies between Image Classification Models
Laugel T., Renard X., Detyniecki M., 2022
XAI4CV workshop (CVPR 2022)
Understanding surrogate explanations: the interplay between complexity, fidelity and coverage
Poyiadzi R., Renard X., Laugel T., Santos-Rodriguez R., Detyniecki M., 2021
arXiv preprint
How to choose an Explainability Method? Towards a Methodical Implementation of XAI in Practice
Vermeire T., Laugel T., Renard X., Martens D., Detyniecki M., 2021
ECML PKDD International Workshop on eXplainable Knowledge Discovery in Data Mining (ECML XKDD 2021)
On the overlooked issue of defining explanation objectives for local-surrogate explainers
Poyiadzi R., Renard X., Laugel T., Santos-Rodriguez R., Detyniecki M., 2021
International Conference on Machine Learning (ICML) Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI
Understanding Prediction Discrepancies in Machine Learning Classifiers
Renard X., Laugel T., Detyniecki M., 2021
arXiv preprint
Sentence-Based Model Agnostic NLP Interpretability
Rychener Y., Renard X., Seddah D., Frossard P., Detyniecki M., 2020
Github repository
QUACKIE: A NLP Classification Task With Ground Truth Explanations
Rychener Y., Renard X., Seddah D., Frossard P., Detyniecki M., 2020
Github repository
Benchmark website for NLP interpretability methods
Local Post-hoc Interpretability for Black-box Classifiers
Laugel T., 2020
Ph.D. Thesis
Imperceptible Adversarial Attacks on Tabular Data
Ballet V., Renard X., Aigrain J., Laugel T., Frossard P., Detyniecki M., 2019
NeurIPS 2019 Workshop on Robust AI in Financial Services: Data, Fairness, Explainability, Trustworthiness, and Privacy (Robust AI in FS 2019)
Concept Tree: High-Level Representation of Variables for More Interpretable Surrogate Decision Trees
Renard X., Woloszko N., Aigrain J., Detyniecki M., 2019
Bank of England and King's College London joint conference on Modelling with Big Data and Machine Learning: Interpretability and Model Uncertainty
The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations
Laugel T., Lesot M.-J., Marsala C., Renard X., Detyniecki M., 2019
International Joint Conference on Artificial Intelligence (IJCAI)
Github repository
Unjustified Classification Regions and Counterfactual Explanations In Machine Learning
Laugel T., Lesot M.-J., Marsala C., Renard X., Detyniecki M., 2019
European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD)
Concept Tree: High-Level Representation of Variables for More Interpretable Surrogate Decision Trees
Renard X., Woloszko N., Aigrain J., Detyniecki M., 2019
International Conference on Machine Learning (ICML) Workshop on Human In the Loop Learning (HILL)
Issues with post-hoc counterfactual explanations: a discussion
Laugel T., Lesot M.-J., Marsala C., Detyniecki M., 2019
International Conference on Machine Learning (ICML) Workshop on Human In the Loop Learning (HILL)
Detecting Potential Local Adversarial Examples for Human-Interpretable Defense
Renard X., Laugel T., Lesot M.-J., Marsala C., Detyniecki M., 2018
European Conference on Machine Learning (ECML/PKDD) Workshop on Recent Advances in Adversarial Machine Learning (Nemesis)
Defining Locality for Surrogates in Post-hoc Interpretability
Laugel T., Renard X., Lesot M.-J., Marsala C., Detyniecki M., 2018
International Conference on Machine Learning (ICML) Workshop on Human Interpretability in Machine Learning (WHI 2018)
Comparison-based inverse classification for interpretability in machine learning
Laugel T., Lesot M.-J., Marsala C., Renard X., Detyniecki M., 2018
International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU)
Github repository
Inverse Classification for Comparison-based Interpretability in Machine Learning
Laugel T., Lesot M.-J., Marsala C., Renard X., Detyniecki M., 2017
arXiv preprint arXiv:1712.08443