2024-03-29T10:10:48Z
http://www.lesli-journal.org/ojs/index.php/lesli/oai
oai:ojs.lesli-journal.org:article/6
2017-05-05T12:52:19Z
lesli:RART
2327-5596
dc
Detection of Deception in a Virtual World
Collister, Lauren B.
University of Pittsburgh
Institute for Linguistic Evidence http://www.pitt.edu/~lbc8 http://orcid.org/0000-0001-5767-8486
This work explores the role of multimodal cues in detection of deception in a virtual world, an online community of World of Warcraft players. Case studies from a five-year ethnography are presented in three categories: small-scale deception in text, deception by avoidance, and large-scale deception in game-external modes. Each case study is analyzed in terms of how the affordances of the medium enabled or hampered deception as well as how the members of the community ultimately detected the deception. The ramifications of deception on the community are discussed, as well as the need for researchers to have a deep community knowledge when attempting to understand the role of deception in a complex society. Finally, recommendations are given for assessment of behavior in virtual worlds and the unique considerations that investigators must give to the rules and procedures of online communities.
e-journals@mail.pitt.edu
2013-12-06 12:49:29
Peer-reviewed Article
application/pdf
http://www.lesli-journal.org/ojs/index.php/lesli/article/view/6
Linguistic Evidence in Security, Law and Intelligence; Vol 1, No 1 (2013)
eng
Copyright (c)
oai:ojs.lesli-journal.org:article/12
2017-05-05T12:52:25Z
lesli:RART
2327-5596
dc
Prosody and its application to forensic linguistics
Harris, Michael J.
University of California, Santa Barbara http://www.michaelharris-linguistics.com/
Gries, Stefan Th.
University of California, Santa Barbara http://www.linguistics.ucsb.edu/faculty/stgries/
Miglio, Viola G.
University of California, Santa Barbara http://violagmiglio.net/
This article describes three studies in prosody and their potential application to the field of forensic linguistics. It begins with a brief introduction to prosody. It then proceeds to describe Miglio, Gries, & Harris (2014), a comparison of prosodic coding of new information by bilingual Spanish-English speakers and monolingual Spanish speakers. A description of Harris & Gries (2011) follows. This study compares the vowel duration variability of bilingual Spanish-English speakers and monolingual Spanish speakers, and touches upon corpus-based frequency effects and differences in linguistic aptitude between the two speaker groups. Finally, a portion of an ongoing study is described (Harris in preparation). This section describes the use of prosodic variables and ensemble methods (or methods that use multiple learning algorithms) to classify languages, even in the case of impoverished data. All three experiments have implications for and applications to the field of forensic linguistics, which are touched upon in each respective section and discussed in a more in-depth manner in the final section of this article. Furthermore, the applications of these methods to forensic linguistics are discussed in light of best practices for forensic linguistics, as outlined in Chaski (2013).
e-journals@mail.pitt.edu
2014-12-19 15:44:42
Peer-reviewed Article
application/pdf
http://www.lesli-journal.org/ojs/index.php/lesli/article/view/12
Linguistic Evidence in Security, Law and Intelligence; Vol 2, No 2 (2014)
eng
Copyright (c) 2014
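The ensemble methods mentioned in the abstract above combine the predictions of several classifiers; a minimal majority-vote sketch in Python (illustrative only, not the study's actual models — the classifier functions passed in are hypothetical stand-ins):

```python
def majority_vote(classifiers, sample):
    # Ask each classifier for a label, then return the most common answer.
    votes = [clf(sample) for clf in classifiers]
    return max(set(votes), key=votes.count)
```

In practice each classifier would be a trained model scoring prosodic features; here any callables returning labels will do.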
oai:ojs.lesli-journal.org:article/1
2017-05-05T12:52:12Z
lesli:RART
2327-5596
dc
Extending Textual Models of Deception to Interrogation Settings
Skillicorn, David
Queen's University, Canada http://www.cs.queensu.ca/home/skill
Lamb, Carolyn
Queen's University
Models that detect deception in text typically outperform humans but are limited to single pieces of text created by a single individual. Text from dialogues and wider conversations reflects linguistic influence among the participants, and this intertwining makes it difficult to ascribe deception to any one of them. We address this problem in dialogues, particularly interrogations, by seeking to detect and remove the influence of the language of a question from the language of the response. Surprisingly, this does not work as expected: the response by a deceptive person to certain categories of words in questions is qualitatively different from that of a truthful person. Successful prediction of deception in responses, therefore, requires analysis using the words of both questions and answers. We show that such prediction is indeed effective.
e-journals@mail.pitt.edu
2013-12-06 12:49:29
Peer-reviewed Article
application/pdf
http://www.lesli-journal.org/ojs/index.php/lesli/article/view/1
Linguistic Evidence in Security, Law and Intelligence; Vol 1, No 1 (2013)
eng
Copyright (c)
oai:ojs.lesli-journal.org:article/13
2017-05-05T12:52:26Z
lesli:RART
2327-5596
dc
Detecting Deception by Analyzing Written Statements in Korean
Kang, Seung-Man
Chungbuk National University
Lee, Hyoungkeun
Korea Police Investigation Academy
This paper delves into the effectiveness of SCAN and its cross-linguistic applicability by analyzing written statements in Korean. For this research, we conducted an experiment in which truth tellers were asked to write a true statement about a staged event and liars a fabricated one about the same event. We analyzed these two types of written statements using the criteria of SCAN. The results (accuracy rate, 81.6%) indicate that SCAN is effective in detecting deception despite the low internal consistency level among coders (Cronbach’s alpha level, 0.577). It was also shown that the SCAN criteria are not universally applicable across languages, as the mode of using pronouns in Korean yields no significant difference between truthful and deceptive statements.
e-journals@mail.pitt.edu
2014-12-19 15:44:42
Peer-reviewed Article
application/pdf
http://www.lesli-journal.org/ojs/index.php/lesli/article/view/13
Linguistic Evidence in Security, Law and Intelligence; Vol 2, No 2 (2014)
eng
Copyright (c) 2014 Linguistic Evidence in Security, Law and Intelligence
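The inter-coder consistency figure reported above (Cronbach’s alpha = 0.577) comes from the standard formula α = k/(k−1) · (1 − Σ item variances / variance of totals); a minimal Python sketch over hypothetical coder score tables (not the study's data):

```python
def cronbach_alpha(scores):
    # scores: one row per statement; each column is one coder's score.
    k = len(scores[0])           # number of coders
    def var(xs):                 # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    totals = [sum(row) for row in scores]
    return k / (k - 1) * (1 - sum(item_vars) / var(totals))
```

Perfectly agreeing coders yield α = 1; values well below 1, as in the study, signal disagreement among coders.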
oai:ojs.lesli-journal.org:article/2
2017-05-05T12:52:13Z
lesli:RART
2327-5596
dc
Analysing Deception in Written Witness Statements
Picornell, Isabel
QED Limited http://www.qed.info/index.html
Written witness statements are a unique source for the study of high-stakes textual deception. To date, however, there is no distinction in the way that they and other forms of verbal deception have been analysed, with written statements treated as extensions of transcribed versions of oral reports. Given the highly context-dependent nature of cues, it makes sense to take the characteristics of the medium into account when analysing for deceptive language. This study examines the characteristic features of witness narratives and proposes a new approach to search for deception cues. Narratives are treated as a progression of episodes over time, and deception as a progression of acts over time. This allows for the profiling of linguistic bundles in sequence, revealing the statements’ internal gradient, and deceivers’ choice of deceptive linguistic strategy. Study results suggest that, at least in the context of written witness statements, the weighting of individual features as deception cues is not static but depends on their interaction with other cues, and that detecting deceivers’ use of linguistic strategy is an effective vehicle for identifying deception.
e-journals@mail.pitt.edu
2013-12-06 12:49:29
Peer-reviewed Article
application/pdf
http://www.lesli-journal.org/ojs/index.php/lesli/article/view/2
Linguistic Evidence in Security, Law and Intelligence; Vol 1, No 1 (2013)
eng
Copyright (c)
oai:ojs.lesli-journal.org:article/19
2019-09-13T12:00:13Z
lesli:RART
2327-5596
dc
Developing and Analyzing a Spanish Corpus for Forensic Purposes
Almela, Ángela
Universidad de Murcia
Alcaraz-Mármol, Gema
Universidad de Castilla-La Mancha
García-Pinar, Arancha
University Center for Defense-UPCT
Pallejá, Clara
University Center for Defense-UPCT
In this paper, the methods for developing a database of Spanish writing that can be used for forensic linguistic research are presented, including our data collection procedures. Specifically, the main instrument used for data collection has been translated into Spanish and adapted from Chaski (2001). It consists of ten tasks, by means of which the subjects are asked to write formal and informal texts about different topics. To date, 93 undergraduates from Spanish universities have participated in the study, as have prisoners convicted of gender-based abuse. A twofold analysis has been performed, since the data collected have been approached from a semantic and a morphosyntactic perspective. Regarding the semantic analysis, psycholinguistic categories have been used, many of them taken from the LIWC dictionary (Pennebaker et al., 2001). In order to obtain a more comprehensive depiction of the linguistic data, some other ad-hoc categories have been created, based on the corpus itself, using a double-check method for their validation so as to ensure inter-rater reliability. Furthermore, as regards morphosyntactic analysis, the natural language processing tool ALIAS TATTLER is being developed for Spanish. Results show that it is possible to differentiate non-abusers from abusers with high accuracy based on linguistic features.
e-journals@mail.pitt.edu
2019-09-13 08:00:13
Peer-reviewed Article
application/pdf
http://www.lesli-journal.org/ojs/index.php/lesli/article/view/19
Linguistic Evidence in Security, Law and Intelligence; Vol 3 (2019)
eng
Copyright (c) 2019 Linguistic Evidence in Security, Law and Intelligence
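The LIWC-style semantic analysis described above amounts to counting how often words from each psycholinguistic category occur in a text; a minimal sketch, where the category word lists are hypothetical stand-ins rather than the actual LIWC dictionary:

```python
def category_rates(text, categories):
    # categories: {name: set of category words};
    # returns each category's rate per 100 running words.
    words = text.lower().split()
    n = len(words) or 1  # avoid division by zero on empty text
    return {name: 100 * sum(w in vocab for w in words) / n
            for name, vocab in categories.items()}
```

Real LIWC processing also handles word stems and punctuation; this sketch only shows the core counting idea.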
oai:ojs.lesli-journal.org:article/4
2017-05-05T12:52:16Z
lesli:RART
2327-5596
dc
False Confessions and the Use of Incriminating Evidence
Cole, Tim
DePaul University http://communication.depaul.edu/faculty-and-staff/faculty/Pages/cole.aspx
Teboul, JC Bruno
DePaul University
Zulawski, David E
Wicklander-Zulawski & Associates http://www.w-z.com
Wicklander, Douglas E
Wicklander-Zulawski & Associates http://www.w-z.com
Sturman, Shane G
Wicklander-Zulawski & Associates http://www.w-z.com
To date, few experimental studies have looked at the factors that influence people’s willingness to confess to something they did not do. One widely cited experiment on the topic (i.e., Kassin & Kiechel, 1996) has suggested that false confessions are easy to obtain and that the use of false incriminating evidence increases the likelihood of obtaining one. The present research attempted to replicate Kassin and Kiechel’s (1996) work using a different experimental task. In the present experiment, unlike Kassin and Kiechel’s (1996) study, the participants were completely certain that they were not responsible for what had happened, thereby providing a different context for testing the idea that false incriminating evidence increases the likelihood of obtaining a false confession. The results are discussed with respect to factors that may or may not increase individuals’ willingness to offer a false admission of guilt.
e-journals@mail.pitt.edu
2013-12-06 12:49:29
Peer-reviewed Article
application/pdf
http://www.lesli-journal.org/ojs/index.php/lesli/article/view/4
Linguistic Evidence in Security, Law and Intelligence; Vol 1, No 1 (2013)
eng
Copyright (c)
oai:ojs.lesli-journal.org:article/20
2019-09-13T12:00:13Z
lesli:RART
2327-5596
dc
Benchmarking Author Recognition Systems for Forensic Application
van Halteren, Hans
Centre for Language Studies, Radboud University, Nijmegen
This paper demonstrates how an author recognition system could be benchmarked, as a prerequisite for admission in court. The system used in the demonstration is the FEDERALES system, and the experimental data used were taken from the British National Corpus. The system was given several tasks, namely attributing a text sample to a specific text, verifying that a text sample was taken from a specific text, and verifying that a text sample was produced by a specific author. For the former two tasks, 1,099 texts with at least 10,000 words were used; for the latter, 1,366 texts with known authors were used, verified against models for the 28 known authors for whom there were three or more texts. The experimental tasks were performed with different sampling methods (sequential samples or samples of concatenated random sentences), different sample sizes (1,000, 500, 250 or 125 words), varying amounts of training material (between 2 and 20 samples) and varying amounts of test material (1 or 3 samples). Under the best conditions, the system performed very well: with 7 training and 3 test samples of 1,000 words of randomly selected sentences, text attribution had an equal error rate of 0.06% and text verification an equal error rate of 1.3%; with 20 training and 3 test samples of 1,000 words of randomly selected sentences, author verification had an equal error rate of 7.5%. Under the worst conditions, with 2 training and 1 test sample of 125 words of sequential text, equal error rates for text attribution and text verification were 26.6% and 42.2%, and author verification did not perform better than chance. Furthermore, the quality degradation curves with slowly worsening conditions were not smooth, but contained steep drops.
All in all, the results show the importance of having a benchmark which is as similar as possible to the actual court material for which the system is to be used, since the measured system quality differed greatly between evaluation scenarios and system degradation could not be predicted easily on the basis of the chosen scenario parameters.
e-journals@mail.pitt.edu
2019-09-13 08:00:13
Peer-reviewed Article
application/pdf
http://www.lesli-journal.org/ojs/index.php/lesli/article/view/20
Linguistic Evidence in Security, Law and Intelligence; Vol 3 (2019)
eng
Copyright (c) 2019 Linguistic Evidence in Security, Law and Intelligence
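The equal error rates reported above are the operating points where the false accept and false reject rates coincide; a minimal threshold-sweep sketch over hypothetical similarity scores (illustrative only, not the FEDERALES system's scoring):

```python
def equal_error_rate(genuine, impostor):
    # genuine: scores for true matches; impostor: scores for non-matches.
    # Sweep every observed score as a threshold (accept if score >= t)
    # and return the rate where FAR and FRR are closest.
    best = None
    for t in sorted(set(genuine + impostor)):
        frr = sum(s < t for s in genuine) / len(genuine)    # true matches rejected
        far = sum(s >= t for s in impostor) / len(impostor) # non-matches accepted
        gap = abs(far - frr)
        if best is None or gap < best[0]:
            best = (gap, (far + frr) / 2)
    return best[1]
```

With well-separated score distributions the EER approaches 0; with heavily overlapping ones it climbs toward 0.5, the chance level the abstract mentions.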
oai:ojs.lesli-journal.org:article/5
2017-05-05T12:52:18Z
lesli:RART
2327-5596
dc
Seeing through Deception: A Computational Approach to Deceit Detection in Spanish Written Communication
Almela, Ángela
Universidad Católica San Antonio de Murcia https://ucam.academia.edu/AngelaAlmela
Valencia-García, Rafael
Universidad de Murcia
Cantos, Pascual
Universidad de Murcia
The present paper addresses the question of the nature of deception language. Specifically, the main aim of this piece of research is the exploration of deceit in Spanish written communication. We have designed an automatic classifier based on Support Vector Machines (SVM) for the identification of deception in an ad hoc opinion corpus. In order to test the effectiveness of the LIWC2001 categories in Spanish, we have drawn a comparison with a Bag-of-Words (BoW) model. The results indicate that the classification of the texts is more successful by means of our initial set of variables than with the latter system. These findings are potentially applicable to areas such as forensic linguistics and opinion mining, where extensive research on languages other than English is needed.
e-journals@mail.pitt.edu
2013-12-06 12:49:29
Peer-reviewed Article
application/pdf
http://www.lesli-journal.org/ojs/index.php/lesli/article/view/5
Linguistic Evidence in Security, Law and Intelligence; Vol 1, No 1 (2013)
eng
Copyright (c)
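The Bag-of-Words baseline that the abstract above compares against reduces each text to word counts over a shared vocabulary; a minimal sketch (illustrative, not the authors' implementation, which feeds such vectors to an SVM):

```python
from collections import Counter

def bag_of_words(texts):
    # Build a shared vocabulary across all texts,
    # then emit one count vector per text over that vocabulary.
    vocab = sorted({w for t in texts for w in t.lower().split()})
    vectors = []
    for t in texts:
        counts = Counter(t.lower().split())
        vectors.append([counts.get(w, 0) for w in vocab])
    return vocab, vectors
```

Each resulting vector can then serve as the feature input to a classifier such as the SVM described in the abstract.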