Abstract
It seems appropriate, if not necessary, to use empirically supported criteria to evaluate reading software applications. The purpose of this study was to develop a research-based evaluation framework and to review selected beginning reading software that might be used with struggling beginning readers. Thirty-one products were reviewed against criteria addressing interface design, instructional design, and beginning reading content. Findings suggested that the software sample generally did not meet the evaluation standards. Results also indicated that software rating highly on interface design tended to rate lower on beginning reading content. Based on these results, implications for practice and next steps for research are discussed.