Leslie N. Smith

All published works
- Symmetry constrained neural networks for detection and localization of damage in metal plates (2024). James Amarel, Christopher Rudolf, Athanasios Iliopoulos, John G. Michopoulos, Leslie N. Smith.
- Building One-Shot Semi-Supervised (BOSS) Learning Up to Fully Supervised Performance (2022). Leslie N. Smith, Adam Conovaloff.
- General Cyclical Training of Neural Networks (2022). Leslie N. Smith.
- Cyclical Focal Loss (2022). Leslie N. Smith.
- An Approach to Partial Observability in Games: Learning to Both Act and Observe (2021). Elizabeth Gilmour, Noah Plotkin, Leslie N. Smith.
- Empirical Perspectives on One-Shot Semi-supervised Learning (2020). Leslie N. Smith, Adam Conovaloff.
- FROST: Faster and more Robust One-shot Semi-supervised Training (2020). Helena E. Liu, Leslie N. Smith.
- Building One-Shot Semi-supervised (BOSS) Learning up to Fully Supervised Performance (2020). Leslie N. Smith, A. Conovaloff.
- A Useful Taxonomy for Adversarial Robustness of Neural Networks (2019). Leslie N. Smith.
- Super-convergence: very fast training of neural networks using large learning rates (2019). Leslie N. Smith, Nicholay Topin.
- A disciplined approach to neural network hyper-parameters: Part 1 -- learning rate, batch size, momentum, and weight decay (2018). Leslie N. Smith.
- Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates (2017). Leslie N. Smith, Nicholay Topin.
- Best Practices for Applying Deep Learning to Novel Applications (2017). Leslie N. Smith.
- Cyclical Learning Rates for Training Neural Networks (2017). Leslie N. Smith.
- Exploring loss function topology with cyclical learning rates (2017). Leslie N. Smith, Nicholay Topin.
- Gradual DropIn of Layers to Train Very Deep Neural Networks (2016). Leslie N. Smith, Emily M. Hand, Timothy Doster.
- Deep Convolutional Neural Network Design Patterns (2016). Leslie N. Smith, Nicholay Topin.
- Gradual DropIn of Layers to Train Very Deep Neural Networks (2015). Leslie N. Smith, Emily M. Hand, Timothy Doster.
- No More Pesky Learning Rate Guessing Games (2015). Leslie N. Smith.
- Cyclical Learning Rates for Training Neural Networks (2015). Leslie N. Smith.
- Convolutional Architecture Exploration for Action Recognition and Image Classification (2015). Jack Turner, David W. Aha, Leslie N. Smith, Kalyan Moy Gupta.
- How to find real-world applications of compressive sensing (2013). Leslie N. Smith.
Commonly Cited References
- ImageNet Large Scale Visual Recognition Challenge (2015). Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein. Referenced 6 times.
- Very Deep Convolutional Networks for Large-Scale Image Recognition (2014). Karen Simonyan, Andrew Zisserman. Referenced 5 times.
- Cyclical Learning Rates for Training Neural Networks (2017). Leslie N. Smith. Referenced 5 times.
- ADADELTA: An Adaptive Learning Rate Method (2012). Matthew D. Zeiler. Referenced 4 times.
- Show and Tell: A Neural Image Caption Generator (2015). Oriol Vinyals, Alexander Toshev, Samy Bengio, Dumitru Erhan. Referenced 4 times.
- Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation (2014). Ross Girshick, Jeff Donahue, Trevor Darrell, Jitendra Malik. Referenced 4 times.
- ADASECANT: Robust Adaptive Secant Method for Stochastic Gradient (2014). Çağlar Gülçehre, Yoshua Bengio. Referenced 4 times.
- No More Pesky Learning Rate Guessing Games (2015). Leslie N. Smith. Referenced 4 times.
- Deep Residual Learning for Image Recognition (2016). Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. Referenced 4 times.
- An Empirical Evaluation of Deep Learning on Highway Driving (2015). Brody Huval, Tao Wang, Sameep Tandon, Jeff Kiske, Will Song, Joel Pazhayampallil, Mykhaylo Andriluka, Pranav Rajpurkar, Toki Migimatsu, Royce Cheng-Yue. Referenced 4 times.
- Sequence to Sequence Learning with Neural Networks (2014). Ilya Sutskever, Oriol Vinyals, Quoc V. Le. Referenced 3 times.
- No More Pesky Learning Rates (2012). Tom Schaul, Sixin Zhang, Yann LeCun. Referenced 3 times.
- Improving neural networks by preventing co-adaptation of feature detectors (2012). Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov. Referenced 3 times.
- Qualitatively characterizing neural network optimization problems (2014). Ian Goodfellow, Oriol Vinyals, Andrew Saxe. Referenced 3 times.
- Wide Residual Networks (2016). Sergey Zagoruyko, Nikos Komodakis. Referenced 3 times.
- Don't Decay the Learning Rate, Increase the Batch Size (2017). Samuel Smith, Pieter-Jan Kindermans, Chris Ying, Quoc V. Le. Referenced 3 times.
- Three Factors Influencing Minima in SGD (2017). Stanisław Jastrzębski, Zachary Kenton, Devansh Arpit, Nicolas Ballas, Asja Fischer, Yoshua Bengio, Amos Storkey. Referenced 3 times.
- FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence (2020). Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D. Cubuk, Alexey Kurakin, Han Zhang, Colin Raffel. Referenced 3 times.
- ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring (2019). David Berthelot, Nicholas Carlini, Ekin D. Cubuk, Alexey Kurakin, Kihyuk Sohn, Han Zhang, Colin Raffel. Referenced 3 times.
- Going Deeper with Convolutions (2015). Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich. Referenced 3 times.
- RandAugment: Practical automated data augmentation with a reduced search space (2019). Ekin D. Cubuk, Barret Zoph, Jonathon Shlens, Quoc V. Le. Referenced 3 times.
- RandAugment: Practical data augmentation with no separate search (2019). Ekin D. Cubuk, Barret Zoph, Jonathon Shlens, Quoc V. Le. Referenced 3 times.
- SGDR: Stochastic Gradient Descent with Warm Restarts (2016). Ilya Loshchilov, Frank Hutter. Referenced 3 times.
- Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (2015). Sergey Ioffe, Christian Szegedy. Referenced 3 times.
- Self-training with Noisy Student improves ImageNet classification (2019). Qizhe Xie, Minh-Thang Luong, Eduard Hovy, Quoc V. Le. Referenced 3 times.
- Densely Connected Convolutional Networks (2016). Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger. Referenced 3 times.
- Deep Networks with Stochastic Depth (2016). Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, Kilian Q. Weinberger. Referenced 3 times.
- Unsupervised Data Augmentation for Consistency Training (2019). Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, Quoc V. Le. Referenced 2 times.
- Matching Networks for One Shot Learning (2016). Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, Daan Wierstra. Referenced 2 times.
- No More Pesky Learning Rates (2012). Tom Schaul, Sixin Zhang, Yann LeCun. Referenced 2 times.
- Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification (2015). Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. Referenced 2 times.
- Hot Swapping for Online Adaptation of Optimization Hyperparameters (2014). Kevin Bache, Dennis DeCoste, Padhraic Smyth. Referenced 2 times.
- The Effects of Hyperparameters on SGD Training of Neural Networks (2015). Thomas M. Breuel. Referenced 2 times.
- Identity Mappings in Deep Residual Networks (2016). Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. Referenced 2 times.
- Going Deeper with Convolutions (2014). Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich. Referenced 2 times.
- Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning (2018). Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Shin Ishii. Referenced 2 times.
- Assume, Augment and Learn: Unsupervised Few-Shot Meta-Learning via Random Labels and Data Augmentation (2019). Antreas Antoniou, Amos Storkey. Referenced 2 times.
- A disciplined approach to neural network hyper-parameters: Part 1 -- learning rate, batch size, momentum, and weight decay (2018). Leslie N. Smith. Referenced 2 times.
- Interpolation Consistency Training for Semi-Supervised Learning (2019). Vikas Verma, Kenji Kawaguchi, Alex Lamb, Juho Kannala, Yoshua Bengio, David López-Paz. Referenced 2 times.
- Towards Understanding Generalization of Deep Learning: Perspective of Loss Landscapes (2017). Lei Wu, Zhanxing Zhu, Weinan E. Referenced 2 times.
- Possible Mechanisms for Neural Reconfigurability and their Implications (2015). Thomas M. Breuel. Referenced 2 times.
- Improved Regularization of Convolutional Neural Networks with Cutout (2017). Terrance DeVries, Graham W. Taylor. Referenced 2 times.
- Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour (2017). Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He. Referenced 2 times.
- Noisy Networks for Exploration (2017). Meire Fortunato, Mohammad Gheshlaghi Azar, Bilal Piot, Jacob Menick, Ian Osband, Alex Graves, Vlad Mnih, Rémi Munos, Demis Hassabis, Olivier Pietquin. Referenced 2 times.
- S4L: Self-Supervised Semi-Supervised Learning (2019). Xiaohua Zhai, Avital Oliver, Alexander Kolesnikov, Lucas Beyer. Referenced 2 times.
- Unsupervised Learning via Meta-Learning (2018). Kyle Hsu, Sergey Levine, Chelsea Finn. Referenced 2 times.
- Random Walk Initialization for Training Very Deep Feedforward Networks (2014). David Sussillo, L. F. Abbott. Referenced 2 times.
- Prototypical Networks for Few-shot Learning (2017). Jake Snell, Kevin Swersky, Richard S. Zemel. Referenced 2 times.
- Streaming Normalization: Towards Simpler and More Biologically-plausible Normalizations for Online and Recurrent Learning (2016). Qianli Liao, Kenji Kawaguchi, Tomaso Poggio. Referenced 2 times.
- An empirical analysis of the optimization of deep network loss surfaces (2016). Daniel Jiwoong Im, Michael Tao, Kristin Branson. Referenced 2 times.