paper list
List of papers I've read
Huiwon
2021. 8. 1. 13:05
(Topics sorted alphabetically)
- Active Learning
- Learning Loss for Active Learning(CVPR, 2019) https://arxiv.org/pdf/1905.03677.pdf
- Anomaly Detection
- 1. Unsupervised
- Towards Total Recall in Industrial Anomaly Detection(2021) https://arxiv.org/pdf/2106.08265.pdf
- FastFlow: Unsupervised Anomaly Detection and Localization via 2D Normalizing Flows(2021) https://arxiv.org/pdf/2111.07677.pdf
- CNN(Convolutional Neural Network) architectures
- Deep Residual Learning for Image Recognition(CVPR, 2016): Resnet https://arxiv.org/pdf/1512.03385.pdf
- Continual Learning
- Gradient Episodic Memory for Continual Learning(NeurIPS, 2017) https://arxiv.org/pdf/1706.08840.pdf
- Data augmentation
- M2m: Imbalanced Classification via Major-to-minor Translation(CVPR, 2020) https://arxiv.org/pdf/2004.00431.pdf
- Adversarial Examples Improve Image Recognition(CVPR, 2020) https://arxiv.org/pdf/1911.09665.pdf
- GNN(Graph Neural Network)
- 1. Analysis
- What Graph Neural Networks cannot Learn: Depth vs Width(ICLR, 2020) https://openreview.net/pdf?id=B1l2bp4YwS
- 2. GCN(Graph Convolutional Network)
- Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering(NIPS, 2016) https://arxiv.org/pdf/1606.09375.pdf
- Hyper-parameter optimization
- Random Search for Hyper-parameter optimization(JMLR, 2012) https://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf
- Interpretable Machine Learning
- Getting a CLUE: A Method for Explaining Uncertainty Estimates(ICLR, 2021) https://arxiv.org/pdf/2006.06848.pdf
- Meta-Learning(/Few-shot learning)
- Model-Agnostic Meta-learning for Fast Adaptation of Deep Networks(ICML, 2017) https://arxiv.org/pdf/1703.03400.pdf
- Prototypical Networks for Few-shot Learning(NIPS, 2017) https://arxiv.org/pdf/1703.05175.pdf
- Meta-Learning with Latent Embedding Optimization(ICLR, 2019) https://arxiv.org/pdf/1807.05960.pdf
- Few-shot Learning via Embedding Adaptation with Set-to-Set Functions(CVPR, 2020) https://arxiv.org/pdf/1812.03664.pdf
- Free Lunch for Few-shot Learning: Distribution Calibration(ICLR, 2021) https://arxiv.org/pdf/2101.06395.pdf
- Normalizing Flow
- Glow: Generative Flow with Invertible 1×1 Convolutions(NeurIPS, 2018) https://arxiv.org/pdf/1807.03039.pdf
- Optimizer
- ADADELTA: An Adaptive Learning Rate Method(CoRR, 2012) https://arxiv.org/pdf/1212.5701.pdf
- Out-of-Distribution detection
- A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks(ICLR, 2017) https://arxiv.org/pdf/1610.02136.pdf
- A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks(NeurIPS, 2018) https://arxiv.org/pdf/1807.03888.pdf
- RNN(Recurrent Neural Network) architectures
- Sequence to Sequence Learning with Neural Networks(NIPS, 2014) https://arxiv.org/pdf/1409.3215.pdf
- Self-supervised Learning
- Momentum Contrast for Unsupervised Visual Representation Learning(CVPR, 2020) https://arxiv.org/pdf/1911.05722.pdf
- A Simple Framework for Contrastive Learning of Visual Representations(ICML, 2020) https://arxiv.org/pdf/2002.05709.pdf
- Unsupervised Learning of Visual Features by Contrasting Cluster Assignments(NeurIPS, 2020) https://arxiv.org/pdf/2006.09882.pdf
- SEED: Self-supervised Distillation for Visual Representation(ICLR, 2021) https://arxiv.org/pdf/2101.04731.pdf
- ReSSL: Relational Self-supervised Learning with Weak Augmentation(NeurIPS, 2021) https://arxiv.org/pdf/2107.09282.pdf
- Emerging Properties in Self-supervised Vision Transformers(ICCV, 2021) https://arxiv.org/pdf/2104.14294.pdf
- Self-supervised Learning by Estimating Twin Class Distributions(2022) https://arxiv.org/pdf/2110.07402.pdf
- Semi-supervised Learning
- MixMatch: A Holistic Approach to Semi-supervised Learning(NeurIPS, 2019) https://arxiv.org/pdf/1905.02249.pdf
- Test Time Adaptation
- TENT: Fully Test-Time Adaptation by Entropy Minimization(ICLR, 2021) https://arxiv.org/pdf/2006.10726.pdf
- MEMO: Test Time Robustness via Adaptation and Augmentation(NeurIPS, 2021) https://arxiv.org/pdf/2110.09506.pdf
- Transformer
- 1. NLP
- Attention is All You Need(NeurIPS, 2017) https://arxiv.org/pdf/1706.03762.pdf
- 2. Vision
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale(ICLR, 2021) https://arxiv.org/pdf/2010.11929.pdf
- Training data-efficient image transformers & distillation through attention(ICML, 2021) https://arxiv.org/pdf/2012.12877.pdf
- Unsupervised Meta-learning(Few-shot learning)
- Unsupervised Learning via Meta-learning(ICLR, 2019) https://arxiv.org/pdf/1810.02334.pdf
- Unsupervised Meta-learning for Few-shot Image Classification(NeurIPS, 2019) https://arxiv.org/pdf/1811.11819.pdf
- Meta-GMVAE: Mixture of Gaussian VAE for Unsupervised Meta-learning(ICLR, 2021) https://openreview.net/pdf?id=wS0UFjsNYjn