This is a manual on how to conduct applied psychophysiological research, and why. It begins immediately with cardinal rules for establishing credibility when preparing or evaluating a clinical presentation. The rest of the book details how to satisfy those rules. An introduction orients the reader to the book's purpose, the statistical software needed, definitions of applied psychophysiology, the rationale of the discipline, and the scientific method. The content is then presented in five sections covering: A) knowing what you are doing, from inspiration through protocol development, research ethics, and the protocol-approval process; B) basic study structures, such as research designs appropriate for office or clinical environments; C) establishing the credibility of data and of psychophysiological publications; D) statistics for evaluating and interpreting psychophysiological data; and E) synthesizing these elements so that write-ups and presentations use appropriate research designs and statistics, provide an adequate basis for securing any needed grants, and offer credible evidence to the professional community. Additional sections F through J provide supporting material: a glossary, sample protocols exemplifying good and bad models, recommendations for further reading, and references.
This book would work well for graduate students in applied psychophysiology, as it walks the reader through a carefully laid out series of steps from initial inspiration through completed investigation and publication. It is an important resource for anyone reading or producing applied psychophysiology research, because most training in research methods does not clearly address applied research. The book clarifies what applied psychophysiology is and how biofeedback and neurofeedback relate to the general field. It also clarifies the need to go beyond reducing Type I errors. In particular, it emphasizes the importance of assessing a design's power to detect a significant finding if one is present, and it details how sample size and variability in the metrics affect power. The book further stresses the need in applied research to report effect sizes: it is not useful to discuss the mechanisms of an effect until one has credible evidence of a sufficiently large effect size. Finally, the book offers a sobering reflection on those who claim that a particular procedure is more efficient than another. The research needed to establish such a claim is expensive and rarely performed, so opinions are offered rather than data.
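The interplay of sample size, variability, and effect size that the book emphasizes can be illustrated with a minimal sketch (not taken from the book): under a normal approximation to the two-sample t-test, the power to detect a standardized effect size d (Cohen's d, which folds the metric's variability into its denominator) grows with the per-group sample size n.

```python
from statistics import NormalDist

def approx_power(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample comparison.

    effect_size: Cohen's d = (mean difference) / (pooled SD), so
    greater variability shrinks d and therefore shrinks power.
    Uses the normal approximation to the t-test (illustrative only).
    """
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)          # critical value, two-sided
    ncp = effect_size * (n_per_group / 2) ** 0.5  # noncentrality, equal groups
    # Probability the test statistic lands in either rejection region.
    return z.cdf(ncp - z_crit) + z.cdf(-ncp - z_crit)

# A "medium" effect (d = 0.5) needs roughly 64 participants per group
# to reach the conventional 80% power at alpha = 0.05.
print(round(approx_power(0.5, 64), 2))
```

This makes the book's point concrete: a small effect size or a noisy metric demands a much larger sample before a real effect is likely to reach significance, which is why underpowered applied studies so often yield nothing interpretable.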