Abstract

This study investigates the effectiveness of an automated system that detects deception by tracking multiple indicators of potential deception. Deception detection research in the information systems discipline has postulated that a new class of screening systems, which automatically conduct interviews and track multiple indicators of deception simultaneously, can achieve increased accuracy. Understanding the robustness of this new class of systems, and the limits of its theorized performance gains, is important for refining the conceptual design. The design science proof-of-concept study presented here implemented such a system and evaluated its robustness for automated deception-detection screening. A large experiment evaluated the effectiveness of the constructed multiple-indicator system, both under normal conditions and in the presence of common types of countermeasures (mental and physical). The results shed light on the relative strength and robustness of various types of deception indicators within this new context. The findings further suggest that measuring multiple indicators can increase accuracy if classification algorithms can compensate for human attempts to counter their effectiveness.

Acknowledgments

The Department of Homeland Security’s (DHS) National Center for Border Security and Immigration (BORDERS) and the Center for Identification Technology Research (CITeR), a National Science Foundation (NSF) Industry/University Cooperative Research Center (I/UCRC), provided funding for this research. Statements provided herein do not necessarily represent the opinions of the funding organizations. Valuable input for this paper was provided by Jay F. Nunamaker, Judee K. Burgoon, anonymous reviewers, and several graduate students at the Center for the Management of Information (CMI).

Notes

1. The first manipulation check question was “Were you carrying any illicit objects through the checkpoint today?” The second question, “Which of the following people were you asked to deliver the bag to?” was followed by four distinct images of faces, one of which was the same image provided in the instructions as the target recipient of the bag (see “Experiment Task” section). Participants who answered both questions incorrectly were disqualified.

2. Missing values were imputed using a random forest approach.
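The note does not specify the exact implementation, but random-forest imputation is commonly done by iteratively predicting each incomplete variable from the others, as in the missForest method. A minimal sketch of that idea, using scikit-learn's IterativeImputer with a RandomForestRegressor on synthetic data (all variable names here are illustrative, not from the study):

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
X[:, 3] = 2 * X[:, 0] + rng.normal(scale=0.1, size=100)  # correlated column

# Knock out roughly 10 percent of values at random to simulate missing data.
X_missing = X.copy()
X_missing[rng.random(X.shape) < 0.1] = np.nan

# Each column with missing values is modeled as a function of the other
# columns using a random forest, cycling until the estimates stabilize.
imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=50, random_state=0),
    max_iter=10,
    random_state=0,
)
X_imputed = imputer.fit_transform(X_missing)
```

The forest-based approach handles nonlinear relationships among indicators without requiring distributional assumptions, which is one reason it is a common default for this kind of sensor data.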

3. As an interesting side note, although the best reported ensemble classifiers produced 86 percent overall accuracy, the best prediction performance resulted from a trained logistic regression model, when no countermeasures were used and all indicators were included (90 percent overall, 90 percent sensitivity, 90 percent specificity). As noted, however, whether logistic regression would outperform other classification algorithms in a replication is uncertain.
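The three figures cited in the note (overall accuracy, sensitivity, specificity) are standard confusion-matrix metrics. A hedged sketch, on synthetic data rather than the study's data or pipeline, of fitting a logistic regression classifier and computing all three:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Illustrative stand-in data; the study's indicators are not reproduced here.
X, y = make_classification(n_samples=300, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()

accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # deceptive cases correctly flagged
specificity = tn / (tn + fp)   # truthful cases correctly cleared
print(f"acc={accuracy:.2f} sens={sensitivity:.2f} spec={specificity:.2f}")
```

Reporting sensitivity and specificity separately matters in screening contexts, since a classifier can achieve high overall accuracy while performing poorly on the rarer (deceptive) class.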

Notes on contributors

Nathan W. Twyman

Nathan W. Twyman (corresponding author; [email protected]) is an assistant professor of business and information technology at Missouri University of Science and Technology. He received his Ph.D. in management information systems from the University of Arizona. He has published in Journal of Management Information Systems, Journal of the Association for Information Systems, and Information and Management. His research interests span human–computer interaction, group systems, and auditing, security, and forensic investigation systems. He has developed and patented a next-generation human risk assessment system design.

Jeffrey Gainer Proudfoot

Jeffrey Gainer Proudfoot is an assistant professor in the Information and Process Management Department at Bentley University. He holds a Ph.D. in management, with an emphasis in management information systems, from the University of Arizona. His work centers on information security and privacy with a focus on automated credibility assessment technologies. His prior research affiliations include the Center for the Management of Information and the National Center for Border Security and Immigration, a Department of Homeland Security Center of Excellence.

Ryan M. Schuetzler

Ryan M. Schuetzler is an assistant professor of information systems and quantitative analysis at the University of Nebraska at Omaha. He holds a Ph.D. from the University of Arizona. His research interests include human–computer interaction with automated agents, deception behavior, privacy, and disclosure. His research has been published in numerous conference proceedings and journals, including Communications of the Association for Information Systems, Journal of Nonverbal Behavior, and Group Decision and Negotiation.

Aaron C. Elkins

Aaron C. Elkins is an assistant professor in the MIS Department at San Diego State University. He holds a Ph.D. in MIS from the University of Arizona. He investigates how the voice, face, body, and language reveal emotion, deception, and cognition for advanced human–computer interaction and artificial intelligence applications. He also investigates how human decision makers are psychologically affected by, use, perceive, and incorporate the next generation of screening technologies into their lives.

Douglas C. Derrick

Douglas C. Derrick is an assistant professor of IT innovation and an associate professor of management, director of the Applied Innovations Lab, co-director of the Commerce and Applied Behavioral Research Lab, and co-director of the Center for Collaboration Science at the University of Nebraska at Omaha. He received his Ph.D. in management information systems from the University of Arizona. His research interests include human–agent interactions, intelligent agents, collaboration technologies, decision support systems, persuasive technology, and influence.
