Moderation and Mediation

We are always on the lookout for good material on statistics and wanted to share this video from Andy Field, author of Discovering Statistics Using IBM SPSS Statistics (4th ed.), on moderation and mediation. Not only does he provide a clear explanation of the difference but he demonstrates how to do basic tests of moderation and mediation using Hayes’ PROCESS macro for SPSS.
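For readers who want a feel for what a basic moderation test involves outside of SPSS, here is a minimal sketch in Python using statsmodels rather than the PROCESS macro Field demonstrates. The data and variable names are hypothetical; the point is simply that moderation is tested by the predictor-by-moderator interaction term in an ordinary regression.

```python
# Minimal moderation sketch (not the PROCESS macro): moderation is
# tested by the x:m interaction term in an ordinary least squares model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)  # hypothetical predictor
m = rng.normal(size=n)  # hypothetical moderator
# Simulated outcome with a built-in interaction effect.
y = 0.5 * x + 0.3 * m + 0.4 * x * m + rng.normal(size=n)

df = pd.DataFrame({"y": y, "x": x, "m": m})

# "x * m" expands to x + m + x:m; a significant x:m coefficient
# indicates that m moderates the effect of x on y.
model = smf.ols("y ~ x * m", data=df).fit()
print(model.summary())
```

Mean-centering the predictor and moderator before forming the product term is a common step that makes the lower-order coefficients easier to interpret. Mediation, by contrast, involves estimating the indirect path from predictor to outcome through a third variable, which is exactly what PROCESS automates.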

The BPAQ-SF: A Brief Measure of Trait Aggression

The 29-item Aggression Questionnaire (AQ; Buss & Perry, 1992) is one of the most popular self-report measures of trait aggression. It yields four useful factors (i.e., Physical Aggression, Verbal Aggression, Anger, and Hostility); however, the four-factor structure of the AQ has not always been confirmed, raising questions about the measure's underlying structure.

Bryant and Smith (2001) developed a 12-item short form of the AQ that retains the four-factor structure and appears to have some psychometric advantages over the original, including improved model fit. This version, referred to as the Buss-Perry Aggression Questionnaire - Short Form (BPAQ-SF) in the literature, provides researchers interested in studying aggression with a more efficient alternative to the AQ.

In addition to Bryant and Smith's (2001) work testing the BPAQ-SF in multiple data sets, Kalmoe (2015) found support for the BPAQ-SF in nationally representative U.S. and college student samples. Slightly modified versions of the BPAQ-SF have also been used with mentally ill male offenders (Diamond, Wang, & Buffington-Vollum, 2005) and federal offenders (Diamond & Magaletta, 2006).

Thus, the BPAQ-SF provides researchers wanting to measure trait aggression with a relatively brief but psychometrically sound option.
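For those administering the BPAQ-SF, a score for each of the four factors is typically computed from its three items. Here is a minimal scoring sketch in Python; the item-to-factor mapping and column names below are placeholders rather than the actual assignments, so consult Bryant and Smith (2001) for the twelve retained items and their factors.

```python
# Minimal BPAQ-SF scoring sketch. The item-to-factor mapping below is a
# placeholder; see Bryant and Smith (2001) for the actual items retained
# from the original AQ and their factor assignments.
import pandas as pd

# Hypothetical column names: item_1 .. item_12, each a Likert rating.
FACTORS = {
    "physical_aggression": ["item_1", "item_2", "item_3"],
    "verbal_aggression": ["item_4", "item_5", "item_6"],
    "anger": ["item_7", "item_8", "item_9"],
    "hostility": ["item_10", "item_11", "item_12"],
}

def score_bpaq_sf(responses: pd.DataFrame) -> pd.DataFrame:
    """Return one mean score per factor for each respondent."""
    scores = pd.DataFrame(index=responses.index)
    for factor, items in FACTORS.items():
        scores[factor] = responses[items].mean(axis=1)
    return scores
```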

Careless Responding in Online Survey Research

Much of our recent research on relational aggression has utilized college student samples and has involved online surveys. Based on published recommendations (e.g., Huang, Curran, Keeney, Poposki, & DeShon, 2011; Liu, Bowling, Huang, & Kent, 2013; Meade & Craig, 2012), we have been incorporating various methods of detecting careless responding in our surveys. We have found that a substantial number of research participants respond carelessly. In the interest of data integrity, procedures for detecting careless responders are clearly essential in online survey research.

For researchers just beginning to incorporate methods for identifying careless responders and reducing careless responding in online survey research, the procedures we have been using include:
  • Modifying consent forms and survey instructions to inform potential participants that quality assurance checks are being used and that failing such checks means they will not receive incentives for participation
  • Including validity items or bogus items that should be answered the same way by participants who are attending to item content
  • Measuring survey completion and/or individual instrument completion time
The use of these procedures has allowed us to ensure that participants who respond carelessly do not receive incentives for participation (e.g., research credit) and that their data can be readily identified and removed; a sketch of this kind of screening logic appears below.
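As a concrete illustration of the second and third procedures above, here is a minimal screening sketch in Python. The column names, the expected bogus-item response, and the completion-time floor are all hypothetical and would need to be tuned to the particular survey.

```python
# Minimal careless-responding screen. All names and thresholds below are
# assumptions for illustration, not validated cutoffs.
import pandas as pd

BOGUS_ITEM = "bogus_1"   # hypothetical bogus/validity item column
BOGUS_EXPECTED = 1       # response an attentive participant should give
MIN_SECONDS = 180        # assumed floor for a plausible completion time

def flag_careless(df: pd.DataFrame) -> pd.Series:
    """True for respondents who fail either quality-assurance check."""
    failed_bogus = df[BOGUS_ITEM] != BOGUS_EXPECTED
    too_fast = df["completion_seconds"] < MIN_SECONDS
    return failed_bogus | too_fast

# Usage: retain only participants who pass both checks.
# clean = survey_data.loc[~flag_careless(survey_data)]
```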

We have noticed that it is becoming increasingly common for authors of studies using online surveys to address how they detected careless responders and what they did with these data. This suggests that the use of such procedures is rapidly becoming part of routine practice for promoting data integrity.