General information
This seminar will consist of presentations of recent papers by groups of students, with a particular emphasis on re-implementing the proposed methods. It is recommended, but not mandatory, to follow the Explainable AI lectures in parallel (WS24_08134600).
If you plan on participating, please pick a paper from the list below and write me an email before November 1, end of day. First come, first served. The number of students per group is limited to three (3); I will update the list once this limit is reached.
List of proposed papers:
- Bianchi, De Santis, Tocchetti, Brambilla, Interpretable Network Visualizations: A Human-in-the-Loop Approach for Post-hoc Explainability of CNN-based Image Classification, IJCAI, 2024
- Deiseroth, Deb, Weinbach, Brack, Schramowski, Kersting, ATMAN: Understanding Transformer Predictions Through Memory Efficient Attention Manipulation, NeurIPS, 2023
- Fokkema, de Heide, van Erven, Attribution-based Explanations that Provide Recourse Cannot be Robust, Journal of Machine Learning Research, 2023
- Humayun, Balestriero, Balakrishnan, Baraniuk, SplineCam: Exact visualization and characterization of deep network geometry and decision boundaries, CVPR, 2023
- Paes, Wei, Calmon, Selective Explanations, NeurIPS, 2024
- Oikarinen, Weng, Linear Explanations for Individual Neurons, ICML, 2024
Last modified: Monday, 21 October 2024, 17:31