Sanity checks for saliency maps, Equation sheets, [XAI-6 (1)]
Wonju Seo · 2021. 4. 20. 11:35
Paper title: Sanity checks for saliency maps
Paper link: arxiv.org/abs/1810.03292
Summary of the key equations:
0) Definition
input : $x \in \mathbb{R}^d$
model : $S : \mathbb{R}^d \rightarrow \mathbb{R}^C$, where $C$ is the number of classes
1) Gradient with respect to input
$$E_{grad} (x) = \frac{\partial S}{\partial x}$$
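For quick reference, here is a minimal PyTorch sketch of this map (my own illustration, not code from the paper); the classifier `model`, input batch `x`, and target class index `c` are assumed names.

```python
import torch

def gradient_map(model, x, c):
    """Vanilla gradient E_grad: dS_c/dx for the target class c (sketch)."""
    x = x.clone().requires_grad_(True)  # track gradients w.r.t. the input
    score = model(x)[:, c].sum()        # scalar class score S_c
    score.backward()                    # populates x.grad with dS/dx
    return x.grad.detach()
```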
2) Gradient $\odot$ Input (element-wise product of the gradient and the input)
$$E_{Grad\odot input}(x)=x\odot \frac{\partial S}{\partial x}$$
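This is a one-line extension of the hypothetical `gradient_map` helper above:

```python
def grad_times_input(model, x, c):
    """Gradient (.) Input: element-wise product of input and gradient."""
    return x * gradient_map(model, x, c)
```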
3) Guided Backpropagation (GBP)
Feature maps derived during the forward pass : $\{f^l, f^{l-1},...,f^0\}$
Intermediate representations obtained during the backward pass : $\{R^l,R^{l-1},...,R^0\}$
$$f^l=relu(f^{l-1})$$
$$ R^{l+1} = \frac{\partial f^{out}}{\partial f^{l+1}}$$
GBP aims to zero out negative gradients during the computation of $R$:
$$R^l=1_{R^{l+1}>0}\,1_{f^l>0}\,R^{l+1}$$
In this equation, $1_{R^{l+1}>0}$ passes only positive gradients and $1_{f^l>0}$ passes only positive activations.
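One common way to realize this in PyTorch is a backward hook on every ReLU, as in the sketch below (my illustration, assuming `nn.ReLU(inplace=False)` modules rather than the functional form). Clamping the hook's `grad_input` at zero implements both masks at once, because the standard ReLU backward has already applied $1_{f^l>0}$.

```python
import torch
import torch.nn as nn

def guided_backprop(model, x, c):
    """GBP sketch: keep only positive gradients through every ReLU."""
    hooks = [m.register_full_backward_hook(
                 lambda mod, gin, gout: (gin[0].clamp(min=0),))
             for m in model.modules() if isinstance(m, nn.ReLU)]
    x = x.clone().requires_grad_(True)
    model(x)[:, c].sum().backward()
    for h in hooks:
        h.remove()
    return x.grad.detach()
```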
4) Integrated Gradients (IG)
$$E_{IG}(x)=(x-\bar x) \odot \int_0^1 \frac{\partial S(\bar x+\alpha (x-\bar x))}{\partial x}\,d\alpha$$
$\bar x$ is the baseline input, usually set to zero.
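In practice the integral is approximated by a Riemann sum over `steps` points on the straight-line path, as in this sketch (zero baseline assumed):

```python
import torch

def integrated_gradients(model, x, c, steps=50):
    """IG sketch: Riemann sum along the path from the zero baseline to x."""
    baseline = torch.zeros_like(x)
    total = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        xi = (baseline + alpha * (x - baseline)).detach().requires_grad_(True)
        model(xi)[:, c].sum().backward()
        total += xi.grad
    return (x - baseline) * total / steps
```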
5) SmoothGrad
$$E_{sg}(x)=\frac{1}{N}\sum_{i=1}^N E(x+g_i), \quad g_i \sim N(0,\sigma^2)$$
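A sketch: SmoothGrad simply averages any explanation map $E$ over noisy copies of the input (here $E$ defaults to the hypothetical `gradient_map` from above):

```python
import torch

def smoothgrad(model, x, c, n=25, sigma=0.1, explain=gradient_map):
    """SmoothGrad sketch: mean explanation over n Gaussian-noised inputs."""
    maps = [explain(model, x + sigma * torch.randn_like(x), c)
            for _ in range(n)]
    return torch.stack(maps).mean(dim=0)
```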
6) VarGrad
$V$ : the variance.
$$E_{vg}(x)=V(E(x+g_i)),\quad g_i \sim N(0,\sigma^2)$$
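VarGrad differs from SmoothGrad only in the final reduction; a sketch under the same assumptions:

```python
import torch

def vargrad(model, x, c, n=25, sigma=0.1, explain=gradient_map):
    """VarGrad sketch: variance (instead of mean) over the noisy samples."""
    maps = [explain(model, x + sigma * torch.randn_like(x), c)
            for _ in range(n)]
    return torch.stack(maps).var(dim=0)
```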
7) GradCAM and Guided GradCAM
$A^k$ : the $k$-th feature map extracted from the last convolutional layer
$$\alpha_c^k = \frac{1}{Z}\sum_i \sum_j \frac{\partial S}{\partial A_{ij}^k}$$
$Z$ : the number of spatial locations in $A^k$
$$E_{gradcam} = ReLU\left(\sum_k \alpha_c^k A^k\right)$$
$$E_{guided-gradcam}(x)=E_{gradcam}\odot E_{gbp}$$
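A hook-based Grad-CAM sketch (my illustration; `target_layer` is whichever convolutional layer you choose, typically the last one). Guided Grad-CAM then multiplies the upsampled CAM element-wise with the GBP map:

```python
import torch
import torch.nn.functional as F

def grad_cam(model, x, c, target_layer):
    """Grad-CAM sketch: ReLU of the GAP-weighted sum of feature maps."""
    acts, grads = {}, {}
    h1 = target_layer.register_forward_hook(
        lambda m, i, o: acts.update(A=o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(dA=go[0]))
    model(x)[:, c].sum().backward()
    h1.remove(); h2.remove()
    alpha = grads['dA'].mean(dim=(2, 3), keepdim=True)  # (1/Z) sum_ij dS/dA^k
    cam = F.relu((alpha * acts['A']).sum(dim=1, keepdim=True))
    # upsample to the input resolution before combining with E_gbp
    return F.interpolate(cam, size=x.shape[2:], mode='bilinear',
                         align_corners=False)
```

Guided Grad-CAM would then be `grad_cam(...) * guided_backprop(...)`, matching the element-wise product above.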
Plus, I've written these down so I can look them up easily later. (As if I'd open the paper every time to check...)