

xai-group

Books

  • Molnar, C. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. https://christophm.github.io/interpretable-ml-book/
  • Belle, V.; and Papantonis, I. 2021. Principles and practice of explainable machine learning. Frontiers in Big Data, 39.
  • A. Barredo Arrieta, N. Díaz-Rodríguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, S. García, S. Gil-López, D. Molina, R. Benjamins, R. Chatila, and F. Herrera, “Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI,” Information Fusion, vol. 58, pp. 82–115, 2020.

Papers

Survey

  • A. Adadi and M. Berrada, “Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI),” IEEE Access, vol. 6, pp. 52138–52160, 2018.
  • Vilone, G.; and Longo, L. 2021. Notions of explainability and evaluation approaches for explainable artificial intelligence. Information Fusion, 76: 89–106.
  • Vitali, F. 2022. A Survey on Methods and Metrics for the Assessment of Explainability under the Proposed AI Act. In Legal Knowledge and Information Systems, volume 346, 235. IOS Press.
  • G. Alicioglu and B. Sun, “A survey of visual analytics for Explainable Artificial Intelligence methods,” Computers & Graphics, vol. 102, pp. 502–520, 2022.

Evaluation

  • Amparore, E.; Perotti, A.; and Bajardi, P. 2021. To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods. PeerJ Computer Science, 7: e479.
  • Agarwal, C.; Saxena, E.; Krishna, S.; Pawelczyk, M.; Johnson, N.; Puri, I.; Zitnik, M.; and Lakkaraju, H. 2022b. OpenXAI: Towards a Transparent Evaluation of Model Explanations. arXiv:2206.11104. (A toy feature-agreement sketch follows this list.)
  • Hase, P.; and Bansal, M. 2020. Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior? In Annual Meeting of the Assoc. for Comput. Linguistics, 5540–5552.
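
The evaluation papers above compare explanations along axes such as faithfulness, stability, and agreement between methods. As a toy illustration only (the cited benchmarks define their own metrics and protocols), a top-k feature-agreement score between two attribution vectors might look like the following sketch; the attribution values are made up.

```python
# Toy illustration, not taken from any cited benchmark: top-k feature
# agreement between two attribution vectors for the same instance.
import numpy as np

def topk_agreement(attr_a: np.ndarray, attr_b: np.ndarray, k: int = 5) -> float:
    """Fraction of overlap between the k features with largest |attribution|."""
    top_a = set(np.argsort(-np.abs(attr_a))[:k])
    top_b = set(np.argsort(-np.abs(attr_b))[:k])
    return len(top_a & top_b) / k

# Made-up attributions for a 6-feature model (e.g., from LIME and SHAP).
lime_attr = np.array([0.40, -0.10, 0.05, 0.30, 0.00, -0.20])
shap_attr = np.array([0.35, -0.05, 0.10, 0.25, 0.02, -0.30])
print(topk_agreement(lime_attr, shap_attr, k=3))  # 1.0: same top-3 features
```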

Models

  • Ribeiro, M. T.; Singh, S.; and Guestrin, C. 2016b. “Why should I trust you?” Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, 1135–1144. San Francisco, CA, USA: ACM.
  • Štrumbelj, E.; and Kononenko, I. 2014. Explaining prediction models and individual predictions with feature contributions. Knowledge and Information Systems, 41(3): 647–665. (Shapley-value sampling precursor to KernelSHAP; a minimal LIME/KernelSHAP usage sketch follows this list.)
  • Tan, S.; Hooker, G.; Koch, P.; Gordo, A.; and Caruana, R. 2023. Considerations when learning additive explanations for black-box models. Machine Learning, 1–27.
  • Agarwal, C.; Johnson, N.; Pawelczyk, M.; Krishna, S.; Saxena, E.; Zitnik, M.; and Lakkaraju, H. 2022a. Rethinking stability for attribution-based explanations. arXiv:2203.06877.
  • Casalicchio, G.; Molnar, C.; and Bischl, B. 2018. Visualizing the feature importance for black box models. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 655–670. Springer.
  • Chen, H.; Lundberg, S.; and Lee, S.-I. 2021. Explaining models by propagating Shapley values of local components. In Explainable AI in Healthcare and Medicine, 261–270. New York City, NY, USA: Springer.
  • Chen, H.; Lundberg, S. M.; and Lee, S.-I. 2022. Explaining a series of models by propagating Shapley values. Nature communications, 13(1): 1–15.
  • Dai, J.; Upadhyay, S.; Aivodji, U.; Bach, S. H.; and Lakkaraju, H. 2022. Fairness via explanation quality: Evaluating disparities in the quality of post hoc explanations. In AAAI/ACM Conference on AI, Ethics, and Society, 203–214.
  • Duell, J.; Fan, X.; Burnett, B.; Aarts, G.; and Zhou, S.-M. 2021. A comparison of explanations given by explainable artificial intelligence methods on analysing electronic health records. In IEEE EMBS Int. Conf. on Biomedical and Health Informatics, 1–4. IEEE.
  • Gramegna, A.; and Giudici, P. 2021. SHAP and LIME: an evaluation of discriminative power in credit risk. Frontiers in Artificial Intelligence, 4: 752558.
  • Kuzlu, M.; Cali, U.; Sharma, V.; and Güler, Ö. 2020. Gaining insight into solar photovoltaic power generation forecasting utilizing explainable artificial intelligence tools. IEEE Access, 8: 187814–187823.
  • Lapuschkin, S.; Wäldchen, S.; Binder, A.; Montavon, G.; Samek, W.; and Müller, K.-R. 2019. Unmasking Clever Hans predictors and assessing what machines really learn. Nature communications, 10(1): 1–8.
  • Ribeiro, M. T.; Singh, S.; and Guestrin, C. 2016a. Nothing else matters: model-agnostic explanations by identifying prediction invariance. arXiv:1611.05817.
  • C. Frye, D. de Mijolla, T. Begley, L. Cowton, M. Stanley, and I. Feige, “Shapley explainability on the data manifold,” in International Conference on Learning Representations, 2021, pp. 1–14. [Online]. Available: https://openreview.net/forum?id=OPyWRrcjVQw
  • P. Joia, D. Coimbra, J. A. Cuminato, F. V. Paulovich, and L. G. Nonato, “Local Affine Multidimensional Projection,” IEEE Transactions on Visualization and Computer Graphics, vol. 17, no. 12, pp. 2563–2571, 2011.
  • P. Xenopoulos, G. Chan, H. Doraiswamy, L. G. Nonato, B. Barr, and C. Silva, “GALE: Globally Assessing Local Explanations,” in Proc. of Topological, Algebraic, and Geometric Learning Workshops, ser. Proceedings of Machine Learning Research, A. Cloninger, T. Doster, T. Emerson, M. Kaul, I. Ktena, H. Kvinge, N. Miolane, B. Rieck, S. Tymochko, and G. Wolf, Eds., vol. 196. PMLR, 2022, pp. 322–331.
  • D. Collaris and J. J. van Wijk, “ExplainExplore: Visual Exploration of Machine Learning Explanations,” in IEEE Pacific Visualization Symposium (PacificVis), 2020, pp. 26–35.
  • G. Y.-Y. Chan, E. Bertini, L. G. Nonato, B. Barr, and C. T. Silva, “Melody: generating and visualizing machine learning model summary to understand data and classifiers together,” arXiv:2007.10614, 2020.
  • J. Yuan, G. Y.-Y. Chan, B. Barr, K. Overton, K. Rees, L. G. Nonato, E. Bertini, and C. T. Silva, “SUBPLEX: A Visual Analytics Approach to Understand Local Model Explanations at the Subpopulation Level,” IEEE Computer Graphics and Applications, vol. 42, no. 6, pp. 24–36, 2022.
  • F. Cheng, Y. Ming, and H. Qu, “DECE: Decision Explorer with Counterfactual Explanations for Machine Learning Models,” IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 2, pp. 1438–1447, 2021.
  • T. Spinner, U. Schlegel, H. Schäfer, and M. El-Assady, “explAIner: A Visual Analytics Framework for Interactive and Explainable Machine Learning,” IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 1, pp. 1064–1074, 2020.
  • T. Han, S. Srinivas, and H. Lakkaraju, “Which explanation should I choose? A function approximation perspective to characterizing post hoc explanations,” Advances in Neural Information Processing Systems, vol. 35, pp. 5256–5268, 2022.
  • Zafar, M. R.; and Khan, N. 2021. Deterministic local interpretable model-agnostic explanations for stable explainability. Machine Learning and Knowledge Extraction, 3(3): 525–541.
  • Yang, M.; and Kim, B. 2019. Benchmarking attribution methods with relative feature importance. arXiv:1907.09701.
  • A. Schwarzschild, M. Cembalest, K. Rao, K. Hines, and J. Dickerson, “Reckoning with the disagreement problem: Explanation consensus as a training objective,” in AAAI/ACM Conference on AI, Ethics, and Society, ser. AIES ’23. New York, NY, USA: Association for Computing Machinery, 2023, pp. 662–678. [Online]. Available: https://doi.org/10.1145/3600211.3604687
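
As a concrete companion to the LIME and Shapley-value entries above, here is a minimal sketch of running LIME and KernelSHAP (via the lime and shap Python packages) on a tabular classifier. The arrays X_train and X_test, the feature_names list, and the fitted scikit-learn-style model are assumed placeholders, not code from any cited paper.

```python
# Minimal sketch, assuming X_train/X_test (numpy arrays), feature_names,
# and a fitted scikit-learn-style classifier `model` already exist.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer

# LIME: fit a sparse local surrogate around a single test instance.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names, mode="classification"
)
lime_exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5
)
print(lime_exp.as_list())  # (feature, weight) pairs of the local linear model

# KernelSHAP: Shapley-value attributions against a small background sample.
background = X_train[:50]  # small background keeps the kernel regression tractable
shap_explainer = shap.KernelExplainer(model.predict_proba, background)
shap_values = shap_explainer.shap_values(X_test[:5])
print(np.shape(shap_values))
```

Comparing such attributions across methods and across perturbed inputs is the setting studied by the stability and disagreement papers in this list.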

Computer vision

  • Bach, S.; Binder, A.; Montavon, G.; Klauschen, F.; Müller, K.-R.; and Samek, W. 2015. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 10(7): e0130140.
  • Hamilton, M.; Lundberg, S.; Zhang, L.; Fu, S.; and Freeman, W. T. 2021. Model-agnostic explainability for visual search. arXiv:2103.00370.
  • A. Mahendran and A. Vedaldi, “Salient deconvolutional networks,” in European Conference Computer Vision (ECCV), B. Leibe, J. Matas, N. Sebe, and M. Welling, Eds. Springer International Publishing, 2016, pp. 120–135.
  • J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. A. Riedmiller, “Striving for simplicity: The all convolutional net,” in International Conference on Learning Representations, ICLR, Y. Bengio and Y. LeCun, Eds., 2015. [Online]. Available: http://arxiv.org/abs/1412.6806
  • K. Simonyan, A. Vedaldi, and A. Zisserman, “Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps,” in International Conference on Learning Representations, ICLR, Y. Bengio and Y. LeCun, Eds., 2014. [Online]. Available: http://arxiv.org/abs/1312.6034 (A minimal gradient-saliency sketch follows this list.)
  • J. Wang, L. Gou, W. Zhang, H. Yang, and H.-W. Shen, “DeepVID: Deep Visual Interpretation and Diagnosis for Image Classifiers via Knowledge Distillation,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 6, pp. 2168–2180, 2019.
  • S. Sattarzadeh, M. Sudhakar, and K. N. Plataniotis, “SVEA: A Small scale Benchmark for Validating the Usability of Post-hoc Explainable AI Solutions in Image and Signal Recognition,” in IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2021, pp. 4141–4150.
  • P.-J. Kindermans, S. Hooker, J. Adebayo, M. Alber, K. T. Schütt, S. Dähne, D. Erhan, and B. Kim, The (Un)reliability of Saliency Methods. Cham: Springer International Publishing, 2019, pp. 267–280.
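
For the saliency-map papers above (Simonyan et al. and follow-ups), a minimal vanilla-gradient saliency sketch in PyTorch is shown below; `model` (a trained image classifier) and `image` (a preprocessed input tensor) are assumed placeholders.

```python
# Minimal sketch, assuming `model` is a trained image classifier and
# `image` is a preprocessed tensor of shape (1, 3, H, W).
import torch

model.eval()
image = image.detach().clone().requires_grad_(True)

logits = model(image)
top_class = logits.argmax(dim=1).item()

# Gradient of the top-class score with respect to the input pixels.
logits[0, top_class].backward()

# Vanilla-gradient saliency: max |gradient| over the channel dimension.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # shape (H, W)
```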

NLP

  • X. Ji, Y. Tu, W. He, J. Wang, H.-W. Shen, and P.-Y. Yen, “USEVis: Visual analytics of attention-based neural embedding in information retrieval,” Visual Informatics, vol. 5, no. 2, pp. 1–12, 2021.
  • P. Atanasova, J. G. Simonsen, C. Lioma, and I. Augenstein, “A Diagnostic Study of Explainability Techniques for Text Classification,” in Conference on Empirical Methods in Natural Language Processing (EMNLP). Online: Association for Computational Linguistics, Nov. 2020, pp. 3256–3274.
  • J. DeYoung, S. Jain, N. F. Rajani, E. Lehman, C. Xiong, R. Socher, and B. C. Wallace, “ERASER: A benchmark to evaluate rationalized NLP models,” in Annual Meeting of the Association for Computational Linguistics. Online: Association for Computational Linguistics, Jul. 2020, pp. 4443–4458.

ML Engineering

  • Bhatt, U.; Xiang, A.; Sharma, S.; Weller, A.; Taly, A.; Jia, Y.; Ghosh, J.; Puri, R.; Moura, J. M. F.; and Eckersley, P. 2020. Explainable Machine Learning in Deployment. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 648–657. New York, NY, USA: Association for Computing Machinery.
  • N. Kokhlikyan, V. Miglani, M. Martin, E. Wang, B. Alsallakh, J. Reynolds, A. Melnikov, N. Kliushkina, C. Araya, S. Yan, and O. Reblitz-Richardson, “Captum: A unified and generic model interpretability library for PyTorch,” 2020.
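
Captum, cited above, implements many of the attribution methods in this list; the following is a minimal, hypothetical usage sketch of its IntegratedGradients API, where `model`, `inputs`, and the target class index 0 are assumed placeholders.

```python
# Minimal sketch, assuming `model` is a trained PyTorch classifier and
# `inputs` is a batch tensor it accepts; target class 0 is arbitrary.
import torch
from captum.attr import IntegratedGradients

model.eval()
ig = IntegratedGradients(model)

# Attribute the class-0 output to the input features against a zero baseline.
attributions, delta = ig.attribute(
    inputs,
    baselines=torch.zeros_like(inputs),
    target=0,
    return_convergence_delta=True,
)
print(attributions.shape, delta)
```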

Neural Networks

  • Shrikumar, A.; Greenside, P.; and Kundaje, A. 2017. Learning important features through propagating activation differences. In 34th International Conference on Machine Learning, 3145–3153. Sydney, Australia: PMLR.
  • Situ, X.; Zukerman, I.; Paris, C.; Maruf, S.; and Haffari, G. 2021. Learning to explain: Generating stable explanations fast. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 5340–5355.
  • M. Ancona, E. Ceolini, C. Öztireli, and M. Gross, “Towards better understanding of gradient-based attribution methods for Deep Neural Networks,” in International Conference on Learning Representations (ICLR), 2018. arXiv:1711.06104.
  • D. Alvarez Melis and T. Jaakkola, “Towards Robust Interpretability with Self-Explaining Neural Networks,” in Advances in Neural Information Processing Systems, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, Eds., vol. 31. Curran Associates, Inc., 2018.
  • J. Heo, S. Joo, and T. Moon, “Fooling Neural Network Interpretations via Adversarial Model Manipulation,” in Advances in Neural Information Processing Systems, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, Eds., vol. 32. Curran Associates, Inc., 2019.
  • Wojtas, M.; and Chen, K. 2020. Feature importance ranking for deep learning. Advances in Neural Information Processing Systems, 33: 5105–5114.
  • A.-K. Dombrowski, M. Alber, C. Anders, M. Ackermann, K.-R. Müller, and P. Kessel, “Explanations can be manipulated and geometry is to blame,” in Advances in Neural Information Processing Systems, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, Eds., vol. 32. Curran Associates, Inc., 2019.

xai-group's People

Contributors

augusnunes, andregdmitri
