- every possible assignment of network weights represents a
syntactically different hypothesis
- the hypothesis space is the n-dimensional Euclidean space of the n network weights
- this hypothesis space is continuous
- since the error E is differentiable with respect to these continuous weight
parameters, we have a well-defined error gradient (see the gradient sketch after this list)
- the inductive bias depends on the interplay between the gradient descent
search and the way the weight space spans the space of representable functions
- roughly: smooth interpolation between data points
- given two positive training instances with no negatives between
them, Backprop will tend to label the points between them as positive (see the interpolation sketch after this list)