Abstract
Content extraction systems can automatically extract entities and relations from raw text and use that information to populate knowledge bases, potentially eliminating the need for manual data discovery and entry. Unfortunately, content extraction is not sufficiently accurate for end users who require high trust in the information uploaded to their databases, creating a need for human validation and correction of extracted content. This article examines the potential influence of content extraction errors on a prototype semiautomated system that allows a human reviewer to correct and validate extracted information before upload, focusing on the identification and correction of precision errors. Content extraction was applied to 6 different corpora, and a Goals, Operators, Methods, and Selection rules Language (GOMSL) model was used to simulate the activities of a human using the prototype system to review extraction results, correct precision errors, ignore spurious instances, and validate information. The simulated task completion rate of the semiautomated system model was compared with that of a second GOMSL model that simulates the steps required for finding and entering information manually. Results quantify the efficiency advantage of the semiautomated workflow, estimated to be roughly 1.5 to 2 times more efficient than a manual workflow, and illustrate the value of employing multidisciplinary quantitative methods to calculate system-level measures of technology utility.
Acknowledgments
The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof, or the MITRE Corporation. We thank our sponsors, system developers, and subject matter experts for their support of this work. We also thank reviewers who provided comments on an earlier version of this article.