Abstract
This study investigated how computer agents’ language style affects summary writing in an Intelligent Tutoring System called CSAL AutoTutor. Participants interacted with two computer agents in one of three language-style conditions: (1) formal, (2) informal, or (3) mixed. Primary results indicated that, from pretest to posttest, participants improved the quality of their summary writing, spent less time writing summaries, and produced summaries that were lower in syntactic complexity but more non-narrative. However, these pretest-to-posttest differences were not affected by the discourse formality the agents used during instruction. Results also showed that participants rated peer summaries of cause/effect texts more accurately in the formal and mixed conditions, but generated summaries with lower referential cohesion in the informal condition, on posttest relative to pretest.