Abstract
This article describes recent work in the area of music and artificial intelligence (AI) that aims to develop working models of musical learning. An implemented AI program is presented that learns general rules of expressive performance (for dynamics and rubato) from examples of human performances. The method improves upon previous research by the author (Widmer 1995a), in which expression rules were learned at the level of individual notes with the help of an explicit background model encoding general knowledge about structural relationships. In the new approach, the note level is abandoned in favor of a direct abstraction to the level of musical structures. The system learns to associate expressive shapes with musical structures, which leads to smoother and more balanced performances and is also more plausible from a musical point of view. An additional effect of this strategy is that expressive patterns can be learned at multiple structural levels simultaneously. The article presents some experimental results with piano performances by the author, as well as a more extended study with real performances by a number of famous pianists. Qualitative and quantitative evaluation indicates a clear superiority of the structure-level approach over attempts to learn expression at the note level. The general approach still suffers from severe limitations, however, which are discussed towards the end of the article.
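To make the core idea of structure-level learning concrete, the following is a minimal, hypothetical sketch (not the article's actual algorithm): each phrase's dynamics curve is reduced to a simple "expressive shape" (relative start, peak, and end loudness), and the learner associates an average shape with each category of musical structure. All class and function names here are invented for illustration.

```python
from collections import defaultdict

def shape(loudness):
    """Summarize a phrase's dynamics curve as an expressive shape:
    (start, peak, end) loudness relative to the phrase mean."""
    mean = sum(loudness) / len(loudness)
    return (loudness[0] - mean, max(loudness) - mean, loudness[-1] - mean)

class ShapeLearner:
    """Toy structure-level learner: it observes performed phrases,
    grouped by structural category, and predicts the average
    expressive shape for each category."""

    def __init__(self):
        self.observed = defaultdict(list)

    def observe(self, category, loudness):
        # Abstract away individual notes: store only the phrase-level shape.
        self.observed[category].append(shape(loudness))

    def predict(self, category):
        shapes = self.observed[category]
        n = len(shapes)
        return tuple(sum(s[i] for s in shapes) / n for i in range(3))
```

A typical arch-shaped crescendo–decrescendo over a phrase would yield a shape with a positive peak and negative endpoints; predicting at the phrase level rather than per note is what gives the smoother, more balanced performances the abstract describes.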