ICML-21 Papers to Read

2021.6.3

ICML 2021 has officially released its accepted papers: out of 5513 submissions, 1184 were accepted (1018 with short talks and 166 with long talks), for an acceptance rate of 21.48%.
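As a quick sanity check, the stated acceptance rate follows directly from the submission and acceptance counts:

```python
# Verify the ICML 2021 acceptance rate from the counts above.
submissions = 5513
accepted = 1018 + 166  # short talks + long talks

rate = accepted / submissions * 100
print(f"{accepted} accepted, rate = {rate:.2f}%")  # → 1184 accepted, rate = 21.48%
```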

For the full list, see: ICML-21 Accepted paper list

Interesting papers

  • A statistical perspective on distillation

  • Model Fusion for Personalized Learning

  • Learning Bounds for Open-Set Learning

  • Learning from the Crowd with Pairwise Comparison

  • Generalization Bounds in the Presence of Outliers: a Median-of-Means Study

  • SiameseXML: Siamese Networks meet Extreme Classifiers with 100M Labels

  • Progressive Learning for Convolutional Neural Networks

  • On the price of explainability for some clustering problems

  • Learning Curves for Analysis of Deep Networks

  • Self-Tuning for Data-Efficient Deep Learning

  • Adversarial robustness guarantees for random deep neural networks

  • AutoSampling: Search for Effective Data Sampling Schedules

  • Soft then Hard: Rethinking the Quantization in Neural Image Compression

  • Implicit Bias of Linear RNNs

  • Break-It-Fix-It: Learning to Repair Code from Unlabeled Data

  • Classification with Rejection Based on Cost-sensitive Classification

  • Oblivious Sketching for Logistic Regression

  • One Pass Late Fusion Multi-view Clustering

  • Exact Gap between Generalization Error and Uniform Convergence in Random Feature Models

  • Attention is not all you need: pure attention loses rank doubly exponentially with depth

  • Pointwise Binary Classification with Pairwise Confidence Comparisons (Feng Lei has moved to Chongqing University)

  • Learning from Similarity-Confidence Data

  • Towards Understanding Learning in Neural Networks with Linear Teachers

  • Leveraged Weighted Loss for Partial Label Learning

  • Active Testing: Sample-Efficient Model Evaluation

  • Robust Unsupervised Learning via L-statistic Minimization

  • RATT: Leveraging Unlabeled Data to Guarantee Generalization

  • Sharper Generalization Bounds for Clustering

  • Towards Better Robust Generalization with Shift Consistency Regularization

  • Dash: Semi-Supervised Learning with Dynamic Thresholding

  • Sinkhorn Label Allocation: Semi-Supervised Classification via Annealed Self-Training

  • Locally Adaptive Label Smoothing Improves Predictive Churn

Noisy labels:

  • Lower-bounded proper losses for weakly supervised classification

  • Disambiguation of Weak Supervision leading to Exponential Convergence rates

  • Clusterability as an Alternative to Anchor Points When Learning with Noisy Labels

  • Label Distribution Learning Machine

  • Discriminative Complementary-Label Learning with Weighted Loss

  • Multi-Dimensional Classification via Sparse Label Encoding

  • On the Inherent Regularization Effects of Noise Injection During Training

  • Provably End-to-end Label-noise Learning without Anchor Points

  • Improved OOD Generalization via Adversarial Training and Pretraining

  • Can Subnetwork Structure Be the Key to Out-of-Distribution Generalization?

  • A General Framework For Detecting Anomalous Inputs to DNN Classifiers

  • Accuracy on the Line: on the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization

  • The importance of understanding instance-level noisy labels

  • Confidence Scores Make Instance-dependent Label-noise Learning Possible

  • Learning Noise Transition Matrix from Only Noisy Labels via Total Variation Regularization

  • Wasserstein Distributional Normalization For Robust Distributional Certification of Noisy Labeled Data

  • Understanding and Mitigating Accuracy Disparity in Regression

  • Revealing the Structure of Deep Neural Networks via Convex Duality

  • Learning from Biased Data: A Semi-Parametric Approach

  • Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels

  • Learning Deep Neural Networks under Agnostic Corrupted Supervision

  • Learning from Noisy Labels with No Change to the Training Process

  • Adversarial Multi Class Learning under Weak Supervision with Performance Guarantees

OOD:

  • Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation
  • Delving into Deep Imbalanced Regression
  • Matrix Sketching for Secure Collaborative Machine Learning
  • A Collective Learning Framework to Boost GNN Expressiveness for Node Classification
  • Out-of-Distribution Generalization via Risk Extrapolation (REx)
  • Don’t Just Blame Over-parametrization for Over-confidence: Theoretical Analysis of Calibration in Binary Classification
  • Graph Convolution for Semi-Supervised Classification: Improved Linear Separability and Out-of-Distribution Generalization
  • Failure Modes and Opportunities in Out-of-distribution Detection with Deep Generative Models

GNN

  • On Explainability of Graph Neural Networks via Subgraph Explorations
  • GRAND: Graph Neural Diffusion
  • Optimization of Graph Neural Networks: Implicit Acceleration by Skip Connections and More Depth
  • Information Obfuscation of Graph Neural Networks
  • Generative Causal Explanations for Graph Neural Networks
  • How Framelets Enhance Graph Neural Networks
  • GraphNorm: A Principled Approach to Accelerating Graph Neural Network Training
  • Let's Agree to Degree: Comparing Graph Convolutional Networks in the Message-Passing Framework
  • Memory-Efficient Graph Neural Networks
  • A Unified Lottery Ticket Hypothesis for Graph Neural Networks
  • Directional Graph Networks
  • Graph Contrastive Learning Automated
  • Automated Graph Representation Learning with Hyperparameter Importance Explanation
  • E(n) Equivariant Graph Neural Networks
  • Breaking the Limits of Message Passing Graph Neural Networks
  • DeepWalking Backwards: From Embeddings Back to Graphs
  • Elastic Graph Neural Networks
  • Graph Neural Networks Inspired by Classical Iterative Algorithms

Contrastive learning

  • Large-Margin Contrastive Learning with Distance Polarization Regularizer

  • CLOCS: Contrastive Learning of Cardiac Signals Across Space, Time, and Patients

  • Self-supervised Graph-level Representation Learning with Local and Global Structure

  • Towards Domain-Agnostic Contrastive Learning

  • Unsupervised Representation Learning via Neural Activation Coding

  • Whitening for Self-Supervised Representation Learning

  • Barlow Twins: Self-Supervised Learning via Redundancy Reduction

  • Self-Damaging Contrastive Learning

  • Contrastive Learning Inverts the Data Generating Process

  • Dissecting Supervised Contrastive Learning

  • Neighborhood Contrastive Learning Applied to Online Patient Monitoring

  • Toward Understanding the Feature Learning Process of Self-supervised Contrastive Learning

  • Function Contrastive Learning of Transferable Meta-Representations

  • Understanding self-supervised learning dynamics without contrastive pairs

  • ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision

Recommendation & Search

  • Rethinking Neural vs. Matrix-Factorization Collaborative Filtering: the Theoretical Perspectives
  • Meta Latents Learning for Open-World Recommender Systems
  • Learning Self-Modulating Attention in Continuous Time Space with Applications to Sequential Recommendation
  • Quantifying Availability and Discovery in Recommender Systems via Stochastic Reachability
  • Correcting Exposure Bias for Link Recommendation
  • Estimating α-Rank from A Few Entries with Low Rank Matrix Completion
  • Follow-the-Regularizer-Leader Routes to Chaos in Routing Games
  • Matrix Completion with Model-free Weighting

Oops!

  • LAMDA: Label Matching Deep Domain Adaptation
  • Making Paper Reviewing Robust to Bid Manipulation Attacks
posted @ 2021-06-05 21:44 by Gelthin