ABSTRACT
The adaptation of membership functions in a fuzzy system is a nonlinear optimization problem; the convergence of online learning algorithms is therefore questionable. We demonstrate these convergence problems by analyzing two types of spikes that can occur during online adaptation: narrow basis-function spikes and non-monotonic basis-function spikes. Further, we show how these spikes can be avoided by restricting the variations of the widths and the distances of the membership functions. Given these restrictions, we conclude that in most cases it is better to adapt only the rule conclusions than to adapt the membership functions as well.
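The following is a minimal illustrative sketch (not the paper's algorithm) of the setting the abstract describes: a zero-order Takagi-Sugeno model with Gaussian membership functions, adapted online by gradient descent. The class name, parameter names, and the lower bound `s_min` are hypothetical; the width constraint stands in for the kind of restriction that prevents a basis function from collapsing into a narrow spike.

```python
import numpy as np

def gaussian(x, c, s):
    """Gaussian membership function with center c and width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

class FuzzyModel:
    """Hypothetical zero-order Takagi-Sugeno model with online adaptation.

    Sketch only: illustrates adapting rule conclusions, optionally adapting
    membership-function widths, and clipping the widths from below so that
    no basis function can degenerate into a narrow spike.
    """
    def __init__(self, centers, width, s_min=0.2):
        self.c = np.asarray(centers, dtype=float)  # membership-function centers
        self.s = np.full_like(self.c, width)       # membership-function widths
        self.w = np.zeros_like(self.c)             # rule conclusions (singletons)
        self.s_min = s_min                         # assumed lower width bound

    def predict(self, x):
        mu = gaussian(x, self.c, self.s)
        return float(mu @ self.w / mu.sum()), mu

    def update(self, x, y, eta=0.1, adapt_mf=True):
        y_hat, mu = self.predict(x)
        e = y - y_hat
        phi = mu / mu.sum()          # normalized basis functions
        self.w += eta * e * phi      # adapt rule conclusions (linear in w)
        if adapt_mf:
            # Gradient step on the widths (nonlinear in s):
            # d y_hat / d s_i = (w_i - y_hat) * phi_i * (x - c_i)^2 / s_i^3
            grad_s = e * (self.w - y_hat) * phi * (x - self.c) ** 2 / self.s ** 3
            self.s += eta * grad_s
            # Restriction: keep widths bounded away from zero,
            # so no basis function can collapse into a narrow spike.
            self.s = np.maximum(self.s, self.s_min)
        return e
```

With `adapt_mf=False` the model reduces to the safer alternative the abstract recommends: adapting only the rule conclusions, which is a linear (and hence well-behaved) estimation problem.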