Haim Sompolinsky



All published works
Simplified derivations for high-dimensional convex learning problems (2024). David G. Clark, Haim Sompolinsky.
Diverse capability and scaling of diffusion and auto-regressive models when learning abstract rules (2024). Binxu Wang, Jiaqi Shang, Haim Sompolinsky.
Robust Learning in Bayesian Parallel Branching Graph Neural Networks: The Narrow Width Limit (2024). Zechen Zhang, Haim Sompolinsky.
Order parameters and phase transitions of continual learning in deep neural networks (2024). Haozhe Shan, Qianyi Li, Haim Sompolinsky.
Coding schemes in neural networks learning classification tasks (2024). Alexander van Meegen, Haim Sompolinsky.
Dissecting the Interplay of Attention Paths in a Statistical Mechanics Theory of Transformers (2024). Lorenzo Tiberi, Francesca Mignacco, Kazuki Irie, Haim Sompolinsky.
Connecting NTK and NNGP: A Unified Theoretical Framework for Neural Network Learning Dynamics in the Kernel Regime (2023). Yehonatan Avidan, Qianyi Li, Haim Sompolinsky.
Probing Biological and Artificial Neural Networks with Task-dependent Neural Manifolds (2023). Michael Kuoch, Chi-Ning Chou, Nikhil Parthasarathy, Joel Dapello, James J. DiCarlo, Haim Sompolinsky, SueYeon Chung.
Optimal Quadratic Binding for Relational Reasoning in Vector Symbolic Neural Architectures (2022). Naoki Hiratani, Haim Sompolinsky.
Soft-margin classification of object manifolds (2022). Uri Cohen, Haim Sompolinsky.
Temporal support vectors for spiking neuronal networks (2022). Ran Rubin, Haim Sompolinsky.
A theory of learning with constrained weight-distribution (2022). Weishun Zhong, Ben Sorscher, Daniel D. Lee, Haim Sompolinsky.
Globally Gated Deep Linear Networks (2022). Qianyi Li, Haim Sompolinsky.
Predicting the outputs of finite deep neural networks trained with noisy gradients (2021). Gadi Naveh, Oded Ben-David, Haim Sompolinsky, Zohar Ringel.
Statistical Mechanics of Deep Linear Neural Networks: The Backpropagating Kernel Renormalization (2021). Qianyi Li, Haim Sompolinsky.
Macroscopic fluctuations emerge in balanced networks with incomplete recurrent alignment (2021). Itamar Daniel Landau, Haim Sompolinsky.
Predicting the Outputs of Finite Networks Trained with Noisy Gradients (2021). Gadi Naveh, Oded Ben-David, Haim Sompolinsky, Zohar Ringel.
New role for circuit expansion for learning in neural networks (2021). Julia Steinberg, Madhu Advani, Haim Sompolinsky.
Statistical Mechanics of Deep Linear Neural Networks: The Back-Propagating Renormalization Group (2020). Qianyi Li, Haim Sompolinsky.
High-dimensional dynamics of generalization error in neural networks (2020). Madhu Advani, Andrew Saxe, Haim Sompolinsky.
Predicting the outputs of finite deep neural networks trained with noisy gradients (2020). Gadi Naveh, Oded Ben-David, Haim Sompolinsky, Zohar Ringel.
Path integral approach to random neural networks (2018). A. Crisanti, Haim Sompolinsky.
Learning Data Manifolds with a Cutting Plane Method (2018). SueYeon Chung, Uri Cohen, Haim Sompolinsky, Daniel D. Lee.
Classification and Geometry of General Perceptual Manifolds (2018). SueYeon Chung, Daniel D. Lee, Haim Sompolinsky.
Balanced excitation and inhibition are required for high-capacity, noise-robust neuronal selectivity (2017). Ran Rubin, L. F. Abbott, Haim Sompolinsky.
Linear readout of object manifolds (2016). SueYeon Chung, Daniel D. Lee, Haim Sompolinsky.
Classification of Manifolds by Single-Layer Neural Networks (2015). SueYeon Chung, Daniel D. Lee, Haim Sompolinsky.
Transition to Chaos in Random Neuronal Networks (2015). Jonathan Kadmon, Haim Sompolinsky.
Dynamics of random neural networks with bistable units (2014). Merav Stern, Haim Sompolinsky, L. F. Abbott.
Interactions between Intrinsic and Stimulus-Evoked Activity in Recurrent Neural Networks (2011). L. F. Abbott, Kanaka Rajan, Haim Sompolinsky.
Theory of Spike Timing-Based Neural Classifiers (2010). Ran Rubin, Rémi Monasson, Haim Sompolinsky.
Stimulus-dependent suppression of chaos in recurrent neural networks (2010). Kanaka Rajan, L. F. Abbott, Haim Sompolinsky.
Stimulus-Dependent Correlations in Threshold-Crossing Spiking Neurons (2009). Yoram Burak, Sam Lewallen, Haim Sompolinsky.
Interactions between Intrinsic and Stimulus-Evoked Activity in Recurrent Neural Networks (2009). L. F. Abbott, Kanaka Rajan, Haim Sompolinsky.
Short-Term Memory in Orthogonal Neural Networks (2004). Olivia L. White, Daniel D. Lee, Haim Sompolinsky.
Mutual Information of Population Codes and Distance Measures in Probability Space (2001). Kai Kang, Haim Sompolinsky.
Equilibrium Properties of Temporally Asymmetric Hebbian Plasticity (2001). Jonathan Rubin, Daniel D. Lee, Haim Sompolinsky.
Statistical Mechanics of Support Vector Networks (1999). Rainer Dietrich, Manfred Opper, Haim Sompolinsky.
Statistical mechanics of the maximum-likelihood density estimation (1994). Naama Barkai, Haim Sompolinsky.
Spectrum of Large Random Asymmetric Matrices (1988). H.-J. Sommers, A. Crisanti, Haim Sompolinsky, Y. Stein.
On the 'naive' mean-field equations for spin glasses (1986). A. J. Bray, Haim Sompolinsky, Clare C. Yu.
Scaling behavior of diffusion on percolation clusters (1983). Shlomo Havlin, Daniel ben-Avraham, Haim Sompolinsky.
Commonly Cited References
Classification and Geometry of General Perceptual Manifolds (2018). SueYeon Chung, Daniel D. Lee, Haim Sompolinsky. Referenced 4 times.
On the use of the Edgeworth expansion in cosmology I: how to foresee and evade its pitfalls (2017). Elena Sellentin, Andrew H. Jaffe, Alan Heavens. Referenced 3 times.
A mean field view of the landscape of two-layer neural networks (2018). Mei Song, Andrea Montanari, Phan-Minh Nguyen. Referenced 3 times.
Statistical mechanics of complex neural systems and high dimensional data (2013). Madhu Advani, Subhaneil Lahiri, Surya Ganguli. Referenced 3 times.
Neural Tangent Kernel: Convergence and Generalization in Neural Networks (2018). Arthur Paul Jacot, Franck Gabriel, Clément Hongler. Referenced 3 times.
Essentially No Barriers in Neural Network Energy Landscape (2018). Felix Draxler, Kambis Veschgini, Manfred Salmhofer, Fred A. Hamprecht. Referenced 3 times.
Linear readout of object manifolds (2016). SueYeon Chung, Daniel D. Lee, Haim Sompolinsky. Referenced 3 times.
Stimulus-dependent suppression of chaos in recurrent neural networks (2010). Kanaka Rajan, L. F. Abbott, Haim Sompolinsky. Referenced 3 times.
Transition to Chaos in Random Networks with Cell-Type-Specific Connectivity (2015). Johnatan Aljadeff, Merav Stern, Tatyana O. Sharpee. Referenced 3 times.
Transition to Chaos in Random Neuronal Networks (2015). Jonathan Kadmon, Haim Sompolinsky. Referenced 3 times.
Bayesian Learning via Stochastic Gradient Langevin Dynamics (2011). Max Welling, Yee Whye Teh. Referenced 3 times.
Storage of correlated patterns in a perceptron (1995). B. López, Michael Schröder, Manfred Opper. Referenced 2 times.
Statistical Mechanics of Optimal Convex Inference in High Dimensions (2016). Madhu Advani, Surya Ganguli. Referenced 2 times.
Finite Depth and Width Corrections to the Neural Tangent Kernel (2019). Boris Hanin, Mihai Nica. Referenced 2 times.
High-dimensional dynamics of generalization error in neural networks (2020). Madhu Advani, Andrew Saxe, Haim Sompolinsky. Referenced 2 times.
Asymptotics of Wide Networks from Feynman Diagrams (2020). Ethan Dyer, Guy Gur-Ari. Referenced 2 times.
Finite Versus Infinite Neural Networks: an Empirical Study (2020). Jaehoon Lee, Samuel S. Schoenholz, Jeffrey Pennington, Ben Adlam, Lechao Xiao, Roman Novak, Jascha Sohl-Dickstein. Referenced 2 times.
On Lazy Training in Differentiable Programming (2018). Lénaïc Chizat, Edouard Oyallon, Francis Bach. Referenced 2 times.
Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes (2018). Roman Novak, Lechao Xiao, Jaehoon Lee, Yasaman Bahri, Greg Yang, Jiri Hron, Daniel A. Abolafia, Jeffrey Pennington, Jascha Sohl-Dickstein. Referenced 2 times.
Reconciling modern machine-learning practice and the classical bias–variance trade-off (2019). Mikhail Belkin, Daniel Hsu, Siyuan Ma, Soumik Mandal. Referenced 2 times.
A Simple Baseline for Bayesian Uncertainty in Deep Learning (2019). Wesley J. Maddox, Timur Garipov, Pavel Izmailov, Dmitry Vetrov, Andrew Gordon Wilson. Referenced 2 times.
Rank correlation and product-moment correlation (1948). P. A. P. Moran. Referenced 2 times.
Response of Spiking Neurons to Correlated Inputs (2002). Rubén Moreno-Bote, Jaime de la Rocha, Alfonso Renart, Néstor Parga. Referenced 2 times.
Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime (2020). Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli, Florent KrząkaƂa. Referenced 2 times.
Towards Understanding the Role of Over-Parametrization in Generalization of Neural Networks (2018). Behnam Neyshabur, Zhiyuan Li, Srinadh Bhojanapalli, Yann LeCun, Nathan Srebro. Referenced 2 times.
The Convergence Rate of Neural Networks for Learned Functions of Different Frequencies (2019). Ronen Basri, David Jacobs, Yoni Kasten, Shira Kritchman. Referenced 2 times.
Deep Neural Networks as Gaussian Processes (2018). Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel S. Schoenholz, Jeffrey Pennington, Jascha Sohl-Dickstein. Referenced 2 times.
Properties of networks with partially structured and partially random connectivity (2015). Yashar Ahmadian, Francesco Fumarola, Kenneth D. Miller. Referenced 2 times.
Non-Gaussian processes and neural networks at finite widths (2019). Sho Yaida. Referenced 2 times.
The large learning rate phase of deep learning: the catapult mechanism (2020). Aitor Lewkowycz, Yasaman Bahri, Ethan Dyer, Jascha Sohl-Dickstein, Guy Gur-Ari. Referenced 2 times.
Deep Neural Networks as Gaussian Processes (2017). Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel S. Schoenholz, Jeffrey Pennington, Jascha Sohl-Dickstein. Referenced 2 times.
Consistency and fluctuations for stochastic gradient Langevin dynamics (2016). Yee Whye Teh, Alexandre H. Thiéry, Sebastian J. Vollmer. Referenced 2 times.
Gaussian Process Behaviour in Wide Deep Neural Networks (2018). Alexander Matthews, Mark Rowland, Jiri Hron, Richard E. Turner, Zoubin Ghahramani. Referenced 2 times.
Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition (2014). Charles F. Cadieu, Ha Hong, Daniel Yamins, Nicolas Pinto, Diego Ardila, Ethan A. Solomon, Najib J. Majaj, James J. DiCarlo. Referenced 2 times.
Deep learning in neural networks: An overview (2014). Jürgen Schmidhuber. Referenced 2 times.
Train longer, generalize better: closing the generalization gap in large batch training of neural networks (2017). Elad Hoffer, Itay Hubara, Daniel Soudry. Referenced 2 times.
Don't Decay the Learning Rate, Increase the Batch Size (2017). Samuel Smith, Pieter-Jan Kindermans, Chris Ying, Quoc V. Le. Referenced 2 times.
Statistical mechanics of learning with soft margin classifiers (2001). Sebastián Risau-Gusman, Mirta B. Gordon. Referenced 2 times.
Spectral Analysis of Large Dimensional Random Matrices (2009). Zhidong Bai, Jack W. Silverstein. Referenced 2 times.
The Correlation Structure of Local Neuronal Networks Intrinsically Results from Recurrent Dynamics (2014). Moritz Helias, Tom Tetzlaff, Markus Diesmann. Referenced 2 times.
Auto- and Crosscorrelograms for the Spike Response of Leaky Integrate-and-Fire Neurons with Slow Synapses (2006). Rubén Moreno-Bote, Néstor Parga. Referenced 2 times.
Scaling description of generalization with number of parameters in deep learning (2020). Mario Geiger, Arthur Paul Jacot, Stefano Spigler, Franck Gabriel, Levent Sagun, Stéphane d'Ascoli, Giulio Biroli, Clément Hongler, Matthieu Wyart. Referenced 2 times.
A Balanced Memory Network (2007). Yasser Roudi, Peter E. Latham. Referenced 2 times.
On the Spectral Bias of Neural Networks (2018). Nasim Rahaman, Aristide Baratin, Devansh Arpit, Felix Draxler, Min Lin, Fred A. Hamprecht, Yoshua Bengio, Aaron Courville. Referenced 2 times.
The generalization error of random features regression: Precise asymptotics and double descent curve (2019). Mei Song, Andrea Montanari. Referenced 2 times.
Critically branched chains and percolation clusters (1980). Z. Alexandrowicz. Referenced once.
Geometric analogue of holographic reduced representation (2009). Diederik Aerts, Marek Czachor, Bart De Moor. Referenced once.
Statistical Ensembles of Complex, Quaternion, and Real Matrices (1965). J. Ginibre. Referenced once.
Correlations and Synchrony in Threshold Neuron Models (2010). Tatjana Tchumatchenko, A. Yu. Malyshev, T. Geisel, Maxim Volgushev, Fred Wolf. Referenced once.
Theory of Spike Timing-Based Neural Classifiers (2010). Ran Rubin, Rémi Monasson, Haim Sompolinsky. Referenced once.