Abstract
The Landscape Model of text comprehension was extended to audiovisual discourse, using text and video versions of TV news stories. Concepts from each story were coded for activation after each sequence, yielding an activation matrix that was reduced to a vector of each concept's total degree of activation. In Study 1, the degree vector correlated well with participants' ratings of how much each sequence made them think of each concept. In Study 2, the degree vector, vectors based on the number of activations, and the degree of co-activation were used to predict participants' recall. The model predicted recall well for the text version, but only moderately well for the video version. The Landscape Model was then modified using Dual Code Theory, coding and analyzing audio and visual information as separate components. The modified model predicted students' recall well, indicating the Landscape Model's robustness as a model of discourse processing.
Notes
1. Another way to analyze the data is to conduct a forward stepwise regression, which first adds the variable that accounts for the most variance; if another variable accounts for additional variance, it is added next. In this analysis, degree predicts R² = .75 of the variance (β = 1.35), F(1, 52) = 156.72, p < .001, and number accounts for another R² = .10 (β = −0.58), ΔF(1, 51) = 35.89, p < .001, for a total of R² = .85, F(2, 51) = 148.88, p < .001. Association did not account for any additional variance.
2. A forward stepwise regression analysis shows that degree predicts R² = .27 of the variance (β = 0.91), F(1, 77) = 29.01, p < .001, and number accounts for another R² = .04 (β = −0.44), ΔF(1, 76) = 4.78, p < .03, for a total of R² = .32, F(2, 76) = 17.61, p < .001. Association did not account for any additional variance.
3. A forward stepwise regression analysis shows that audio-only degree predicts 64% of the variance, F(1, 77) = 138.36, p < .001, and visual-only degree accounts for another 4%, ΔF(1, 76) = 10.77, p = .002. Visual-only association added another 2.2%, ΔF(1, 75) = 5.79, p = .02, and audio-only number added 1.9%, ΔF(1, 74) = 5.27, p = .02. Visual-only number and audio-only association did not account for any additional variance. For the total model, F(4, 74) = 49.67, p < .001.
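The forward stepwise procedure described in note 1 (greedily adding whichever predictor yields the largest gain in R², and stopping once no remaining predictor contributes further) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' analysis: the predictor names `degree`, `number`, and `association` mirror the studies' variables, but the data, coefficients, and stopping threshold are invented for demonstration.

```python
import numpy as np

def r_squared(X, y):
    # Ordinary least squares fit with an intercept; return R^2.
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def forward_stepwise(X, y, names, threshold=0.01):
    # Greedily add the predictor giving the largest R^2 gain; stop when
    # no candidate improves R^2 by more than `threshold` (illustrative
    # stand-in for the F-change significance test used in the notes).
    selected, steps, current_r2 = [], [], 0.0
    remaining = list(range(X.shape[1]))
    while remaining:
        best_r2, best_j = max((r_squared(X[:, selected + [j]], y), j)
                              for j in remaining)
        if best_r2 - current_r2 <= threshold:
            break
        selected.append(best_j)
        remaining.remove(best_j)
        steps.append((names[best_j], best_r2 - current_r2))
        current_r2 = best_r2
    return steps

# Synthetic data: recall depends strongly on degree, weakly (and
# negatively) on number, and not at all on association.
rng = np.random.default_rng(0)
n = 80
degree = rng.normal(size=n)
number = rng.normal(size=n)
association = rng.normal(size=n)
recall = 1.0 * degree - 0.4 * number + rng.normal(scale=0.5, size=n)

X = np.column_stack([degree, number, association])
steps = forward_stepwise(X, recall, ["degree", "number", "association"])
for name, gain in steps:
    print(f"{name}: R^2 gain = {gain:.3f}")
```

With data generated this way, the procedure selects `degree` first and `number` second, mirroring the entry order reported in the notes; `association` contributes only chance-level variance.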