Understanding the computational difficulty of a binary-weight perceptron and the advantage of input sparseness
Limited precision of synaptic weights is a key aspect of both biological and hardware implementations of neural networks. Assigning low-precision weights during learning is a non-trivial task, but it may benefit from representing the to-be-learned items using a sparse code. However, the computational difficulty resulting from low weight precision and the advantage …
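As a concrete illustration of the setting described above, the following is a minimal sketch of a binary-weight perceptron trained on sparse binary patterns. It uses a clipped-perceptron-style rule (continuous hidden weights, binarized for the output); all parameters (`N`, `P`, the coding level `f`, the threshold `theta`, the learning rate) are illustrative assumptions, not values from this work.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200   # number of input units
P = 20    # number of patterns to store
f = 0.1   # coding level: fraction of active inputs (sparseness)

# Sparse binary input patterns and random target labels in {-1, +1}
X = (rng.random((P, N)) < f).astype(int)
y = rng.integers(0, 2, P) * 2 - 1

# Continuous "hidden" weights; the perceptron only uses their binarized form
h = rng.standard_normal(N)
theta = f * N / 2  # firing threshold (illustrative assumption)

def output(w_bin, x):
    """Perceptron output with binary weights w_bin in {0, 1}."""
    return 1 if w_bin @ x >= theta else -1

# Clipped-perceptron-style learning: update hidden weights on errors,
# but classify with the binarized weights
for _ in range(500):
    errors = 0
    for mu in range(P):
        w_bin = (h >= 0).astype(int)  # binarize weights to {0, 1}
        if output(w_bin, X[mu]) != y[mu]:
            h += 0.1 * y[mu] * X[mu]  # update only along active inputs
            errors += 1
    if errors == 0:
        break

w_bin = (h >= 0).astype(int)
accuracy = np.mean([output(w_bin, X[mu]) == y[mu] for mu in range(P)])
```

At this low load (P much smaller than N) the sketch typically classifies all stored patterns correctly; the interesting regime studied in settings like this one is when P grows and the binarization constraint makes the problem computationally hard, with sparse coding (small `f`) reducing interference between patterns.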