
[ICLR2019] Oral Papers Roundup

A roundup of the ICLR2019 oral presentation series.

BA-Net: Dense Bundle Adjustment Networks

Keywords: Structure-from-Motion, Bundle Adjustment, Dense Depth Estimation

TL;DR: This paper introduces a network architecture to solve the structure-from-motion (SfM) problem via feature bundle adjustment (BA)

Deterministic Variational Inference for Robust Bayesian Neural Networks

Keywords: Bayesian neural network, variational inference, variational Bayes, variance reduction, empirical Bayes

TL;DR: A method for eliminating gradient variance and automatically tuning priors for effective training of Bayesian neural networks
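
A core ingredient of this line of work is propagating means and variances through each layer in closed form instead of sampling. As a minimal sketch of that flavor of moment propagation (not the paper's full scheme; the helper name is mine), here is the standard closed form for the moments of a ReLU applied to a Gaussian pre-activation, checked against Monte Carlo:

```python
import numpy as np
from math import erf, exp, pi, sqrt

def relu_moments(mu, var):
    """Closed-form mean/variance of ReLU(x) for x ~ N(mu, var): the kind of
    deterministic moment propagation that replaces Monte Carlo sampling."""
    s = sqrt(var)
    z = mu / s
    pdf = exp(-0.5 * z * z) / sqrt(2 * pi)    # standard normal pdf at z
    cdf = 0.5 * (1.0 + erf(z / sqrt(2.0)))    # standard normal cdf at z
    mean = mu * cdf + s * pdf
    second = (mu * mu + var) * cdf + mu * s * pdf
    return mean, second - mean * mean

# Sanity check against sampling:
mu, var = 0.3, 0.5
x = np.maximum(np.random.normal(mu, sqrt(var), 1_000_000), 0.0)
print(relu_moments(mu, var), (x.mean(), x.var()))
```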

Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks

Keywords: Deep Learning, Natural Language Processing, Recurrent Neural Networks, Language Modeling

TL;DR: We introduce a new inductive bias that integrates tree structures in recurrent neural networks.
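
The ordering comes from a cumulative-softmax ("cumax") activation that turns gate logits into monotone master forget and input gates, so high-ranking neurons hold longer-term information. A minimal sketch of just that activation, with illustrative shapes and names rather than the paper's code:

```python
import numpy as np

def cumax(x):
    """Cumulative softmax: cumsum(softmax(x)) is a soft version of a binary
    gate of the form (0, ..., 0, 1, ..., 1), inducing an order over neurons."""
    e = np.exp(x - x.max())
    return np.cumsum(e / e.sum())

rng = np.random.default_rng(0)
master_forget = cumax(rng.normal(size=8))        # monotonically increasing in [0, 1]
master_input = 1.0 - cumax(rng.normal(size=8))   # monotonically decreasing in [0, 1]
print(master_forget.round(2))
print(master_input.round(2))
```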

Large Scale GAN Training for High Fidelity Natural Image Synthesis

Keywords: GANs, Generative Models, Large Scale Training, Deep Learning

TL;DR: GANs benefit from scaling up.

Learning deep representations by mutual information estimation and maximization

Keywords: representation learning, unsupervised learning, deep learning

TL;DR: We learn deep representations by maximizing mutual information, leveraging structure in the objective, and are able to compete with fully supervised classifiers with comparable architectures
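
The objective trains a critic to score jointly drawn feature pairs above shuffled ones; a Jensen-Shannon-style bound on mutual information makes this tractable. A toy sketch of that loss on synthetic critic scores (the function and scores are illustrative, not the paper's code):

```python
import numpy as np

def softplus(x):
    return np.logaddexp(0.0, x)

def jsd_mi_objective(scores_joint, scores_marginal):
    """Jensen-Shannon-style MI objective (sketch): reward high critic scores
    on true pairs and low scores on shuffled pairs; maximizing this pushes
    up a lower bound related to mutual information."""
    return (-softplus(-scores_joint)).mean() - softplus(scores_marginal).mean()

rng = np.random.default_rng(0)
print(jsd_mi_objective(rng.normal(2.0, 1.0, 128),    # scores on true pairs
                       rng.normal(-2.0, 1.0, 128)))  # scores on shuffled pairs
```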

KnockoffGAN: Generating Knockoffs for Feature Selection using Generative Adversarial Networks

Keywords: Knockoff model, Feature selection, False discovery rate control, Generative Adversarial networks

Learning Protein Structure with a Differentiable Simulator

Keywords: generative models, simulators, molecular modeling, proteins, structured prediction

TL;DR: We use an unrolled simulator as an end-to-end differentiable model of protein structure and show it can (sometimes) hierarchically generalize to unseen fold topologies.

ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness

Keywords: deep learning, psychophysics, representation learning, object recognition, robustness, neural networks, data augmentation

TL;DR: ImageNet-trained CNNs are biased towards object texture (instead of shape like humans). Overcoming this major difference between human and machine vision yields improved detection performance and previously unseen robustness to image distortions.

Smoothing the Geometry of Probabilistic Box Embeddings

Keywords: embeddings, order embeddings, knowledge graph embedding, relational learning

TL;DR: Improve hierarchical embedding models using kernel smoothing
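
The smoothing replaces the hard per-dimension hinge max(0, hi - lo) in a box's volume with a smooth surrogate, so gradients flow even when boxes are disjoint. A minimal sketch using a softplus relaxation as a stand-in for the paper's kernel smoothing (temperature and helper names are mine):

```python
import numpy as np

def softplus(x, t):
    return t * np.logaddexp(0.0, x / t)

def smoothed_volume(lo, hi, t=0.1):
    """Smoothed box volume (sketch): softplus(hi - lo) per dimension instead
    of the hard hinge max(0, hi - lo), multiplied across dimensions."""
    return float(np.prod(softplus(hi - lo, t)))

def smoothed_intersection(lo1, hi1, lo2, hi2, t=0.1):
    # The intersection box takes elementwise max lower / min upper corners.
    return smoothed_volume(np.maximum(lo1, lo2), np.minimum(hi1, hi2), t)

# Toy conditional probability P(A | B) from two overlapping boxes:
lo_a, hi_a = np.zeros(2), np.ones(2)
lo_b, hi_b = np.full(2, 0.5), np.full(2, 1.5)
print(smoothed_intersection(lo_a, hi_a, lo_b, hi_b) / smoothed_volume(lo_b, hi_b))
```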

On Random Deep Weight-Tied Autoencoders: Exact Asymptotic Analysis, Phase Transitions, and Implications to Training

Keywords: Random Deep Autoencoders, Exact Asymptotic Analysis, Phase Transitions

TL;DR: We study the behavior of weight-tied multilayer vanilla autoencoders under the assumption of random weights. Via an exact characterization in the limit of large dimensions, our analysis reveals interesting phase transition phenomena.

Meta-Learning Update Rules for Unsupervised Representation Learning

Keywords: Meta-learning, unsupervised learning, representation learning

TL;DR: We learn an unsupervised learning algorithm that produces useful representations from a set of supervised tasks. At test-time, we apply this algorithm to new tasks without any supervision and show performance comparable to a VAE.

Transferring Knowledge across Learning Processes

Keywords: meta-learning, transfer learning

TL;DR: We propose Leap, a framework that transfers knowledge across learning processes by minimizing the expected distance the training process travels on a task's loss surface.

Generating High Fidelity Images with Subscale Pixel Networks and Multidimensional Upscaling

TL;DR: We show that autoregressive models can generate high fidelity images.

Temporal Difference Variational Auto-Encoder

Keywords: generative models, variational auto-encoders, state space models, temporal difference learning

TL;DR: A generative model of temporal data that builds an online belief state, operates in latent space, and performs jumpy predictions and rollouts of states.

A Unified Theory of Early Visual Representations from Retina to Cortex through Anatomically Constrained Deep CNNs

Keywords: visual system, convolutional neural networks, efficient coding, retina

TL;DR: We reproduced neural representations found in biological visual systems by simulating their neural resource constraints in a deep convolutional model.

Pay Less Attention with Lightweight and Dynamic Convolutions

Keywords: Deep learning, sequence to sequence learning, convolutional neural networks, generative models

TL;DR: Dynamic lightweight convolutions are competitive to self-attention on language tasks.
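
A lightweight convolution is a depthwise 1D convolution whose kernel is softmax-normalized over its width and shared across groups of channels; the dynamic variant additionally predicts the kernel from the current time step. A small sketch of the lightweight case (shapes and names illustrative):

```python
import numpy as np

def lightweight_conv(x, w):
    """Lightweight convolution (sketch): a depthwise 1D convolution whose
    kernel is softmax-normalized over its width and shared across groups of
    channels ("heads").  x: (time, channels); w: (heads, kernel_width)."""
    T, C = x.shape
    H, K = w.shape
    w = np.exp(w - w.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)            # softmax over kernel width
    pad = K // 2
    xp = np.pad(x, ((pad, K - 1 - pad), (0, 0)))
    out = np.empty_like(x)
    for c in range(C):
        head = c * H // C                        # channel group shares one kernel
        for t in range(T):
            out[t, c] = xp[t:t + K, c] @ w[head]
    return out

y = lightweight_conv(np.random.randn(10, 8), np.random.randn(2, 3))
print(y.shape)
```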

Enabling Factorized Piano Music Modeling and Generation with the MAESTRO Dataset

Keywords: music, piano transcription, transformer, wavenet, audio synthesis, dataset, midi

TL;DR: We train a suite of models capable of transcribing, composing, and synthesizing audio waveforms with coherent musical structure, enabled by the new MAESTRO dataset.

Learning to Remember More with Less Memorization

Keywords: memory-augmented neural networks, writing optimization

Learning Robust Representations by Projecting Superficial Statistics Out

Keywords: domain generalization, robustness

TL;DR: Building on previous work on domain generalization, we hope to produce a classifier that will generalize to previously unseen domains, even when domain identifiers are not available during training.
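
The idea is to characterize superficial statistics (e.g. textural features) explicitly and then remove the component of the representation that they explain. The paper's scheme is more involved; a bare linear-algebra sketch of the "project out" step (helper names mine):

```python
import numpy as np

def project_out(H, S):
    """Project superficial-statistics directions out of a representation
    (sketch): remove the component of H explained by the column space of S,
    keeping only the orthogonal residual."""
    # Least-squares fit of H from S, then subtract the fitted part.
    P = S @ np.linalg.lstsq(S, H, rcond=None)[0]
    return H - P

# Toy usage: H mixes a "semantic" signal with a superficial one in S.
rng = np.random.default_rng(1)
S = rng.normal(size=(32, 2))            # superficial statistics per example
H = rng.normal(size=(32, 8)) + S @ rng.normal(size=(2, 8))
H_robust = project_out(H, S)
print(np.abs(S.T @ H_robust).max())     # residual is ~orthogonal to S
```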

Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware

Keywords: Trusted hardware, integrity, privacy, secure inference, SGX

TL;DR: We accelerate secure DNN inference in trusted execution environments (by a factor of 4x-20x) by selectively outsourcing the computation of linear layers to a faster yet untrusted co-processor.
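
Integrity of the outsourced linear layers can be verified far more cheaply than recomputing them, using Freivalds-style randomized checks of matrix products. A minimal sketch of that check (function name mine):

```python
import numpy as np

def freivalds_check(A, B, C, trials=10):
    """Freivalds' algorithm (sketch): probabilistically verify that C == A @ B
    in O(n^2) per trial, the kind of check used to verify linear layers
    outsourced to an untrusted accelerator."""
    n = C.shape[1]
    for _ in range(trials):
        r = np.random.randint(0, 2, size=(n, 1)).astype(A.dtype)
        # Compare A @ (B @ r) with C @ r: cheap, and catches errors w.h.p.
        if not np.allclose(A @ (B @ r), C @ r):
            return False
    return True

A, B = np.random.randn(64, 64), np.random.randn(64, 64)
assert freivalds_check(A, B, A @ B)          # honest result passes
assert not freivalds_check(A, B, A @ B + 1)  # tampered result is caught
```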

The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision

Keywords: Neuro-Symbolic Representations, Concept Learning, Visual Reasoning

TL;DR: We propose the Neuro-Symbolic Concept Learner (NS-CL), a model that learns visual concepts, words, and semantic parsing of sentences without explicit supervision on any of them.

The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks

Keywords: Neural networks, sparsity, pruning, compression, performance, architecture search

TL;DR: Feedforward neural networks that can have weights pruned after training could have had the same weights pruned before training
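
The accompanying procedure is iterative magnitude pruning: train, prune the smallest-magnitude weights, rewind survivors to their initial values, and repeat. A toy sketch of that loop (the `train` callback is a hypothetical stand-in for a full training run):

```python
import numpy as np

def iterative_magnitude_pruning(init_weights, train, rounds=3, frac=0.2):
    """Lottery-ticket style pruning (sketch): repeatedly train, prune the
    smallest-magnitude surviving weights, then rewind survivors to their
    original initialization."""
    mask = {k: np.ones_like(w) for k, w in init_weights.items()}
    for _ in range(rounds):
        trained = train({k: w * mask[k] for k, w in init_weights.items()})
        for k, w in trained.items():
            alive = np.abs(w[mask[k] == 1])
            threshold = np.quantile(alive, frac)   # prune lowest `frac` of survivors
            mask[k] = np.where((np.abs(w) < threshold) & (mask[k] == 1), 0.0, mask[k])
    # The "winning ticket": the original init restricted to surviving weights.
    return {k: w * mask[k] for k, w in init_weights.items()}, mask

# Toy usage with a dummy "training" step that just perturbs the weights:
init = {"layer1": np.random.randn(100)}
ticket, mask = iterative_magnitude_pruning(init, lambda w: {k: v + 0.1 for k, v in w.items()})
print(mask["layer1"].mean())  # fraction of weights kept
```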

FFJORD: Free-Form Continuous Dynamics for Scalable Reversible Generative Models

Keywords: generative models, density estimation, approximate inference, ordinary differential equations

TL;DR: We use continuous time dynamics to define a generative model with exact likelihoods and efficient sampling that is parameterized by unrestricted neural networks.
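
Computing the exact likelihood requires the trace of the Jacobian of the dynamics; FFJORD makes this scalable with Hutchinson's stochastic trace estimator, which needs only Jacobian-vector products. A minimal numpy sketch of the estimator on a linear map, where the trace is known exactly:

```python
import numpy as np

def hutchinson_trace(jac_vec_prod, dim, samples=100):
    """Hutchinson's estimator (sketch): E[v^T J v] = tr(J) for random v with
    zero mean and identity covariance; only Jacobian-vector products needed."""
    est = 0.0
    for _ in range(samples):
        v = np.random.choice([-1.0, 1.0], size=dim)   # Rademacher noise
        est += v @ jac_vec_prod(v)
    return est / samples

# Toy check on a linear map f(z) = A z, whose Jacobian is A itself:
A = np.random.randn(5, 5)
print(hutchinson_trace(lambda v: A @ v, dim=5, samples=20000), np.trace(A))
```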

How Powerful are Graph Neural Networks?

Keywords: graph neural networks, theory, deep learning, representational power, graph isomorphism, deep multisets

TL;DR: We develop theoretical foundations for the expressive power of GNNs and design a provably most powerful GNN.
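
The provably most powerful variant, GIN, aggregates neighbor features with a sum (an injective multiset function) and then applies an MLP. A toy sketch of one such layer (the one-layer "MLP" here is a stand-in):

```python
import numpy as np

def gin_layer(h, adj, mlp, eps=0.0):
    """One GIN-style aggregation step (sketch): sum-pool neighbor features,
    then apply an MLP: h_v' = MLP((1 + eps) * h_v + sum_{u in N(v)} h_u).
    Sum aggregation over the neighbor multiset is what gives the layer its
    Weisfeiler-Lehman-level expressive power."""
    return mlp((1.0 + eps) * h + adj @ h)

# Toy usage on a 3-node path graph with a hypothetical one-layer "MLP":
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
h = np.eye(3)                      # one-hot node features
W = np.random.randn(3, 4)
print(gin_layer(h, adj, lambda x: np.maximum(x @ W, 0.0)))
```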

Ref: https://chillee.github.io
