Post list: Deep learning study (43)
iMTE
Paper title: Stitch it in Time: GAN-Based Facial Editing of Real Videos
Paper link: https://arxiv.org/abs/2201.08361
YouTube video: https://www.youtube.com/watch?v=4lQkQSmA8nA
Stitch it in Time: GAN-Based Facial Editing of Real Videos
The ability of Generative Adversarial Networks to encode rich semantics within their latent space has been widely adopted for facial image editing. However, replicating th..
Once you fall behind in deep learning research, it is hard to catch up. Specifically, when a paper review suddenly arrives and a revision is due, or when I have to concentrate on writing a paper, keeping up with the trends takes deliberate effort. (Especially in AI, where countless papers come out every day, following up demands considerable work.) The research directions I plan to follow up on are: 1) Generative adversarial networks (GAN), 2) Meta learning, 3) Transformer-based models, 4) Self-supervised learning. Of course, covering all of these areas in depth will be difficult, but my goal is to read the major papers and write short summaries...
Paper title: CAMERAS: Enhanced Resolution And Sanity Preserving Class Activation Mapping For Image Saliency
Paper link: https://arxiv.org/abs/2106.10649
CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency
Backpropagation image saliency aims at explaining model predictions by estimating model-centric importance of individual pixels in the input. However, class-inse..
Paper title: Revisiting The Evaluation of Class Activation Mapping for Explainability: A Novel Metric and Experimental Analysis
Paper link: https://openaccess.thecvf.com/content/CVPR2021W/RCV/html/Poppi_Revisiting_the_Evaluation_of_Class_Activation_Mapping_for_Explainability_A_CVPRW_2021_paper.html
CVPR 2021 Open Access Repository
Revisiting the Evaluation of Class Activation Mapping for Explainability: A ..
Paper title: Towards Better Explanations of Class Activation Mapping
Paper link: https://arxiv.org/abs/2102.05228
Towards Better Explanations of Class Activation Mapping
Increasing demands for understanding the internal behavior of convolutional neural networks (CNNs) have led to remarkable improvements in explanation methods. Particularly, several class activation mapping (CAM) based methods, which gene..
Paper title: Towards Learning Spatially Discriminative Feature Representation
Paper link: https://arxiv.org/abs/2109.01359
Towards Learning Spatially Discriminative Feature Representations
The backbone of traditional CNN classifier is generally considered as a feature extractor, followed by a linear layer which performs the classification. We propose a novel loss function, termed as CAM-loss, to constrai..
Paper title: Informative Class Activation Maps
Paper link: https://arxiv.org/abs/2106.10472
Informative Class Activation Maps
We study how to evaluate the quantitative information content of a region within an image for a particular label. To this end, we bridge class activation maps with information theory. We develop an informative class activation map (infoCAM). Given a classi..
arxiv.org
Key points: 1) ..
Paper title: Eigen-CAM: Class Activation Map Using Principal Components
Paper link: https://arxiv.org/abs/2008.00299
Eigen-CAM: Class Activation Map using Principal Components
Deep neural networks are ubiquitous due to the ease of developing models and their influence on other domains. At the heart of this progress is convolutional neural networks (CNNs) that are capable of learning representations or fe..
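The preview cuts off before the method details, so here is a minimal sketch of the core Eigen-CAM idea as commonly implemented: the saliency map is the projection of the last convolutional layer's activations onto their first principal component, with no gradients or class labels involved. The function name `eigen_cam` and the NumPy-only setup are illustrative assumptions, not code from the post.

```python
import numpy as np

def eigen_cam(activations):
    """Minimal Eigen-CAM sketch (assumed implementation, not the authors' code).

    activations: array of shape (C, H, W) from the last convolutional layer.
    Returns an (H, W) saliency map scaled to [0, 1].
    """
    C, H, W = activations.shape
    flat = activations.reshape(C, H * W).T          # (H*W, C): one feature vector per spatial position
    flat = flat - flat.mean(axis=0, keepdims=True)  # center features before PCA
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    saliency = (flat @ vt[0]).reshape(H, W)         # project onto the first principal component
    saliency = np.maximum(saliency, 0)              # keep positive evidence only
    if saliency.max() > 0:
        saliency = saliency / saliency.max()
    return saliency
```

For example, feeding a (512, 14, 14) activation from a CNN backbone yields a 14x14 map that can then be upsampled onto the input image for visualization.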
Paper title: Combinational Class Activation Maps for Weakly Supervised Object Localization
Paper link: https://openaccess.thecvf.com/content_WACV_2020/html/Yang_Combinational_Class_Activation_Maps_for_Weakly_Supervised_Object_Localization_WACV_2020_paper.html
WACV 2020 Open Access Repository
Seunghan Yang, Yoonhyung Kim, Youngeun Kim, Changick Kim; Proceedings of the IEEE/CVF Winter Conference on Applica..
Paper title: How to Manipulate CNNs to Make Them Lie: the GradCAM Case
Paper link: https://arxiv.org/abs/1907.10901
How to Manipulate CNNs to Make Them Lie: the GradCAM Case
Recently many methods have been introduced to explain CNN decisions. However, it has been shown that some methods can be sensitive to manipulation of the input. We continue this line of work and investigate the explanation method Gra..
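For context on the method this paper attacks, below is a minimal Grad-CAM sketch in PyTorch: gradients of the target class score are global-average-pooled into channel weights, which reweight the last conv layer's activations. The names `grad_cam`, `model`, and `target_layer` are placeholders; this is a generic reference sketch, not code from the paper or the post.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, image, class_idx):
    """image: (1, 3, H, W) tensor; returns an (H, W) heatmap in [0, 1] for class_idx."""
    feats, grads = [], []
    fwd = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    bwd = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    try:
        model.eval()
        score = model(image)[0, class_idx]          # score of the target class
        model.zero_grad()
        score.backward()
    finally:
        fwd.remove()
        bwd.remove()
    A, dA = feats[0], grads[0]                      # activations and their gradients, (1, C, h, w)
    weights = dA.mean(dim=(2, 3), keepdim=True)     # global-average-pool the gradients
    cam = F.relu((weights * A).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    cam = cam.squeeze()
    return ((cam - cam.min()) / (cam.max() - cam.min() + 1e-8)).detach()
```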