“To know that we know what we know, and to know that we do not know what we do not know, that is true knowledge…” (Copernicus)
A growing number of academic papers have focused on LVA technology in recent years, covering its unique capabilities as well as its limitations. In the interest of proper disclosure, we list below research that supports the technology alongside studies with negative findings (those we believe were conducted in good faith), together with our comments on each. Please note that this page is updated regularly.
We encourage researchers from the relevant fields of science to join us in research and to seek ways in which our emotion detection voice analysis technology can be used to advance the human sciences and benefit mankind at large.
Please contact us at: bizdev@nemesysco.com
Recent Research Published
Methylphenidate Mediated Change in Prosody is Specific to the Performance of a Cognitive Task in Female Adult ADHD Patients
PubMed, 06 May, 2015
Recent research, carried out over the past 8 years and led by Dr. Yuval Bloch, head of the cognitive laboratory at the Shalvata Mental Health Care Center in Israel, confirms the hypothesis that the production of specific voice characteristics is influenced by Methylphenidate therapy while an ADHD patient performs a cognitive task, but not while performing an emotional task. ADHD patients were recorded performing a cognitive task and an emotional task before and after taking Methylphenidate, and the audio recordings were analysed with LVA technology. The voice analysis showed a significant pre-post difference in the Concentration parameter level during the female patients' cognitive task, indicating that the Methylphenidate-mediated change in prosody is specific to the performance of a cognitive task in female adult ADHD patients.
Speech Analysis in Financial Markets
Foundations and Trends® in Accounting, Vol. 7, Issue 2, 01 Mar, 2013
Recent research by William J. Mayew and Mohan Venkatachalam of the Fuqua School of Business, Duke University, North Carolina. From the review's conclusion: "The purpose of this review is to describe and expand upon research that examines nonverbal communication in financial markets. … it can be useful in capital markets given the ubiquitous availability of other competing information is novel … we foresee tremendous opportunities for researchers to grow this budding area of research. We believe that a deeper understanding of the relevance and usefulness of the vocal component of nonverbal communication will be a useful starting point for exploration of other aspects of nonverbal communication such as facial expressions and gestures."
Vocal Analysis Software for Security Screening: Validity and Deception Detection Potential
Homeland Security Affairs Journal, March 2012
DHS Centers of Excellence Science and Technology Student Papers, Elkins, Aaron C., Judee Burgoon, and Jay Nunamaker.
"This research examines how reliable and valid commercial vocal analysis software is for predicting emotion and deception in security screening contexts using experimental methods. While research exists that evaluates current vocal analysis software's built-in classifications, there is a gap in our understanding of how it may actually perform in a real high-stakes environment..."
"...when the vocal variables were analyzed independent of the software’s interface, the variables documented to measure Stress, Cognitive Effort, and Fear significantly differentiated between truth, deception, stressful, and cognitive dissonance induced speech."
Analyzing Speech to Detect Financial Misreporting
Journal of Accounting Research, Volume 50, Issue 2, pages 349–392, May 2012
ABSTRACT: We examine whether vocal markers of cognitive dissonance are useful for detecting financial misreporting. We use speech samples of CEOs during earnings conference calls, and generate vocal dissonance markers using automated vocal emotion analysis software. We begin by assessing construct validity for the software-generated dissonance markers by correlating them with four dissonance-from-misreporting proxies obtained in a laboratory setting. We find a positive association between these proxies and vocal dissonance markers generated by the software, suggesting the software's dissonance markers have construct validity. Applying the software to CEO speech, we find that vocal dissonance markers are positively associated with the likelihood of irregularity restatements. The diagnostic accuracy levels are 11% better than chance and of similar magnitude to models based solely on financial accounting information. Moreover, the association between vocal dissonance markers and irregularity restatements holds even after controlling for financial accounting and linguistic-based predictors. Our results provide new evidence on the role of vocal cues in detecting financial misreporting.
The Power of Voice: Managerial Affective States and Future Firm Performance
The Journal of Finance, Volume 67, Issue 1, pages 1–44, February 2012
Jessen L. Hobson (College of Business, Illinois), William J. Mayew, and Mohan Venkatachalam (Fuqua School of Business, Duke University).
ABSTRACT: We measure managerial affective states during earnings conference calls by analyzing conference call audio files using vocal emotion analysis software. We hypothesize and find that, when managers are scrutinized by analysts during conference calls, positive and negative affects displayed by managers are informative about the firm's financial future. Analysts do not incorporate this information when forecasting near-term earnings. When making stock recommendation changes, however, analysts incorporate positive but not negative affect. This study presents new evidence that managerial vocal cues contain useful information about a firm's fundamentals, incremental to both quantitative earnings information and qualitative "soft" information conveyed by linguistic content.
Vocalic Markers of Deception and Cognitive Dissonance for Automated Emotion Detection Systems
Elkins, Aaron C., Ph.D., THE UNIVERSITY OF ARIZONA, 2011, 184 pages; 3473611
ABSTRACT: This dissertation investigates vocal behavior, measured using standard acoustic and commercial vocal analysis software, as it occurs naturally while lying, experiencing cognitive dissonance, or receiving a security interview conducted by an Embodied Conversational Agent (ECA).
Layered Voice Analysis Based Determination of Personality Traits
Australasian Medical Journal, Aug 2010, Vol. 3, Issue 8, p. 521
Manchireddy, Brinda; Sadaf, Sumaiyah; Kamalesh, Joseph
Research by Mamata Medical College correlating the 16PF personality test with LVA emotional readings.
ABSTRACT:
"Introduction: Voice opens a door through which emotions fleetly escape, analogous to actions manifesting one's personality traits. Voice analysis is the study of speech sounds for purposes other than linguistic content, such as in speech recognition. Layered voice analysis identifies various types of stress levels, cognitive processes, and emotional reactions that are reflected in different properties of the voice. LVA uses a unique mathematical process to detect different types of patterns and abnormalities in the speech flow and classify them in terms of stress, excitement, confusion and other relevant emotional states. Thus the research question: 'Does the outpouring of emotions through one's voice reflect on one's personality traits?' ....
... Conclusions: A significant correlation was seen between the emotional factors and certain personality traits. Thus the emotions displayed through voice can be used as a tool to determine personality."
"Voice lie detector" Utilization Research for lie detection - Study by South Korean police polygraph unit
Korean Journal of Polygraph, 2010
Comparison between polygraph and LVA 6.50 results (translated from Korean). Polygraph test results, as determined by the polygraph examiners of the Seoul Metropolitan Police Agency, were classified as one of the following: True Reaction (NDI), False Reaction (DI), or Inconclusive (INC). The same recorded statements were then analysed with the Layered Voice Analyzer (LVA); the LVA analysis was performed blind to the polygraph results. When the INC classification was included, the LVA and polygraph results matched in 33 of 40 cases (82.5%) and mismatched in 7 (17.5%). When INC cases were excluded, the results matched in 33 of 38 cases (86.8%) and mismatched in 5 (13.2%).
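The two agreement rates above follow directly from the reported case counts; a minimal sketch reproducing the arithmetic (the counts are taken from the translated summary):

```python
# Reproduce the LVA-vs-polygraph agreement rates from the Korean study.
matched = 33               # cases where LVA and the polygraph agreed
mismatched_with_inc = 7    # disagreements when Inconclusive (INC) cases are included
mismatched_no_inc = 5      # disagreements once INC cases are excluded

total_with_inc = matched + mismatched_with_inc  # 40 cases in total
total_no_inc = matched + mismatched_no_inc      # 38 cases without INC

agreement_with_inc = matched / total_with_inc   # 33 / 40
agreement_no_inc = matched / total_no_inc       # 33 / 38

print(f"Including INC: {agreement_with_inc:.1%}")  # prints 82.5%
print(f"Excluding INC: {agreement_no_inc:.1%}")    # prints 86.8%
```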
Reliability and Validity of Layered Voice Analysis technology in the detection of mental stress
"Seishin Igaku" (Psychiatry) Magazine, October 2008 - ISSN: 0488-1281, ISBN: 05627
Nemoto K1, Tachikawa H1, Takao T1, Sato H1, Ashizawa Y1, Endo G1, Tanaka K1, Ishii R1, Ishii N1, Hashimoto K1, Iguchi T1, Hada S2, Hori M3 and Asada T3
1Psycholosoft, 1-1-1 Tennodai, Tsukuba, Japan; 2Alegria Co., Ltd., Tokyo, Japan; 3Dept. of Psychiatry, Univ. of Tsukuba, Tsukuba, Japan.
"It is known that the speech signal contains features which provide information about a human speaker. Although several technologies to detect stress using the human voice are available, reports on the reliability and validity of these technologies are controversial. In this study, we investigated the reliability and validity of the Layered Voice Analysis (LVA) technology. Methods: One hundred and six healthy subjects participated in this study. First, stress was assessed using the Spielberger State-Trait Anxiety Inventory (STAI). Blood pressure (BP) was also measured. Then, subjects were randomly assigned to an anagram task group and a control group. Before the task began, all of the subjects were asked to answer 10 questions vocally, and their answers were recorded. After answering the questions, the task group performed the anagram task whereas the control group simply read a series of words aloud. After the task, STAI-S and BP were measured again. Answers to each question were analyzed using LVA and 22 parameters were computed. The internal consistency of each parameter was assessed using the answers given before the task. A two-sample t-test was performed to see whether the parameters changed significantly due to the anagram task. Results: Of the 22 parameters, 18 had a Cronbach's alpha greater than 0.6. The two-sample t-test showed that 10 of these 18 parameters, along with STAI-S and systolic BP, changed significantly during the anagram task. Conclusion: Most of the parameters LVA computes are reliable, and their values changed significantly under stressful conditions. LVA might be useful in the detection of mental stress."
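The internal-consistency statistic used in this study, Cronbach's alpha, can be computed from per-item and total-score variances. A minimal illustration with purely hypothetical scores (not data from the study):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item score lists
    (one list per item, same number of subjects in each)."""
    k = len(items)                                    # number of items
    item_vars = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-subject total score
    total_var = pvariance(totals)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical data: 3 items rated by 5 subjects.
items = [
    [2, 3, 3, 4, 5],
    [2, 2, 3, 4, 4],
    [1, 3, 3, 5, 5],
]
alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.2f}")  # prints alpha = 0.94
```

As in the study's criterion, values above roughly 0.6 are conventionally treated as acceptable internal consistency.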
A Robotic KANSEI Communication System Based on Emotional Synchronization
2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Acropolis Convention Center, Nice, France, Sept, 22-26, 2008
Hashimoto, Minoru; Yamano, Misaki; Usui, Tatsuya (Shinshu University).
Research by Shinshu University, Japan, using Nemesysco’s emotion detection technology to improve human-machine communication.