Abstract
We study learning from examples in higher-order perceptrons, which can realize polynomially separable rules. We first calculate the storage capacity for random binary patterns and find that it is a monotonically increasing function of the relative weight of the highest-order monomial term. We then analyse the generalization ability of higher-order perceptrons trained on examples drawn from a realizable rule. Unlike their first-order counterparts, higher-order perceptrons are found to exhibit stepwise learning as a function of the number of training examples.