The Immersive Virtual Reality Assay (IVRA) is a virtual reality paradigm designed to engage three self-regulation targets (self-focused reflection, affect, and cognition) and four corresponding brain circuits, in a fixed order: (1) the default mode circuit (related to self-focused reflection), (2) the negative affect circuit, (3) the positive affect circuit, and (4) the cognitive control circuit. Participants experience different virtual environments using the Oculus Rift DK2, a virtual reality headset developed and manufactured by Oculus VR. The self-regulation targets listed above are assessed behaviorally and via self-report.

The paradigm assesses a participant's regulation of cognition using three optional virtual reality environments. The first is VROG (Illusion Walk, 2014), which is designed to assess cognitive control (relating to the cognitive control circuit: regions in the dorsolateral prefrontal cortex, anterior cingulate cortex, dorsal parietal cortex, and posterior cingulate gyrus). Participants play as "frogs" in this virtual environment and are instructed to "eat" non-wasp bugs while avoiding wasps, completing the task as quickly and accurately as possible. Experimenters record the number of non-wasp bugs eaten, which raise the score, and wasps eaten, which lower it.

A second option for a cognitive control environment uses Crystal Rift (Psytec Games, 2016), which presents a maze-like dungeon. Participants are told to run down several hallways as quickly as possible while avoiding open trap doors. Each trap door remains open for a few seconds, during which the participant must pause and wait; wait times lengthen toward the end of the game to provoke errors. By scoring the frequency of premature advancement over open trap doors, researchers can assess automatic response inhibition.
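The source describes the scoring of these two environments only qualitatively; as an illustrative sketch, the per-session metrics might be tallied as below. The function names, point values, and rate formula are assumptions for illustration, not part of the published paradigm.

```python
# Illustrative scoring sketch (hypothetical weights). The source specifies
# only that non-wasp bugs raise the VROG score and wasps lower it, and that
# Crystal Rift is scored by the frequency of premature advancement.

def vrog_score(bugs_eaten: int, wasps_eaten: int,
               bug_points: int = 1, wasp_penalty: int = 1) -> int:
    """Net VROG score: correct 'eats' add points, wasps subtract them."""
    return bugs_eaten * bug_points - wasps_eaten * wasp_penalty

def premature_advancement_rate(premature_steps: int, trap_doors: int) -> float:
    """Proportion of trap doors crossed before they closed (Crystal Rift)."""
    if trap_doors == 0:
        return 0.0
    return premature_steps / trap_doors

print(vrog_score(bugs_eaten=12, wasps_eaten=3))   # → 9
print(premature_advancement_rate(2, 10))          # → 0.2
```

Keeping the two environments' metrics as separate functions mirrors how the assay treats them: VROG yields a net accuracy/speed score, while Crystal Rift yields an error rate interpreted as a failure of response inhibition.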
A third option for a cognitive control environment is based on Fruit Ninja VR (Halfbrick Studios, 2016). Participants are placed in a virtual environment in which they use their controllers as swords to slice through various fruits that launch up from the ground, while avoiding bombs, which are visually distinct but spatially similar to the fruits. If a fruit hits the ground before being sliced, it counts as a "strike." The round ends after three strikes or if a bomb is accidentally sliced. Strikes, bombs sliced, and score are recorded per round.

In addition to the behavioral metrics listed above for each virtual environment, participants' behavior is video-recorded via FRAPS screen-capture software (FRAPS, 2016) and later coded. The headset's coordinates in space relative to the origin, as well as its three-dimensional rotations (i.e., pitch, yaw, and roll), are also captured over the duration of each environment.

For self-report measures, participants are prompted twice during each virtual environment by the on-screen instruction "How strong? 0-8" to rate the strength of their emotions; they state their ratings aloud for the experimenter to record manually. At the end of each virtual environment, participants complete a self-report form rating the intensity of their emotions on the following scales: amusement, anger, confusion, contempt, disgust, fear, happiness, pain, relief, sadness, tension, relaxation, nausea/discomfort, and strength of immersion. Participants' motion sickness is also monitored via self-report.
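The pose and self-report streams just described lend themselves to a simple per-sample record. The field names and clamping helper below are assumptions for illustration; the source specifies only that position relative to the origin, pitch/yaw/roll, and spoken 0-8 emotion ratings are captured.

```python
# Illustrative logging sketch (hypothetical field names). Captures the data
# streams described in the text: headset position relative to the origin,
# pitch/yaw/roll rotations, and in-environment "How strong? 0-8" ratings.

from dataclasses import dataclass

@dataclass
class PoseSample:
    t: float       # seconds since environment start
    x: float       # position relative to origin
    y: float
    z: float
    pitch: float   # rotations, in degrees
    yaw: float
    roll: float

def record_rating(raw: int) -> int:
    """Clamp a spoken 'How strong? 0-8' rating to the valid range."""
    return max(0, min(8, raw))

session_log = []
session_log.append(PoseSample(t=0.0, x=0.0, y=1.6, z=0.0,
                              pitch=0.0, yaw=15.0, roll=0.0))
print(record_rating(9))   # out-of-range entry clamped → 8
```

Time-stamping each pose sample lets coders align head movement with the twice-per-environment emotion prompts and with the video-coded behavioral record.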
The Immersive Virtual Reality Assay (IVRA) was created to engage self-regulation targets in order to elucidate mechanisms of change implicated in depression and obesity. Deficient self-regulation is associated with an array of maladaptive behaviors and consequences (e.g., overeating, inactivity, and psychopathology). Conversely, improved weight management, mental health, and social interactions are outcomes associated with greater aptitude for regulating one's cognitions (Gross & John, 2003; Tangney, Baumeister, & Boone, 2004; Gillison et al., 2015). Cognitive regulation (a target of self-regulation) may therefore be a mechanism for changing unhealthy behaviors.
Gillison, F., Stathi, A., Reddy, P., Perry, R., Taylor, G., Bennett, P., Dunbar, J., & Greaves, C. (2015). Processes of behavior change and weight loss in a theory-based weight loss intervention program: a test of the process model for lifestyle behavior change. International Journal of Behavioral Nutrition and Physical Activity, 12(1), 2.
Gross, J. J., & John, O. P. (2003). Individual differences in two emotion regulation processes: implications for affect, relationships, and well-being. Journal of Personality and Social Psychology, 85(2), 348.
Tangney, J. P., Baumeister, R. F., & Boone, A. L. (2004). High self-control predicts good adjustment, less pathology, better grades, and interpersonal success. Journal of Personality, 72(2), 271-324.
The psychometric properties of this measure have not yet been assessed.
This measure has not yet been shown to be influenced by experimental manipulation.
This measure has not yet been validated.
The Science of Behavior Change (SOBC) program seeks to promote basic research on the initiation, personalization and maintenance of behavior change. By integrating work across disciplines, this effort will lead to an improved understanding of the underlying principles of behavior change. The SOBC program aims to implement a mechanisms-focused, experimental medicine approach to behavior change research and to develop the tools required to implement such an approach. The experimental medicine approach involves: identifying an intervention target, developing measures to permit verification of the target, engaging the target through experimentation or intervention, and testing the degree to which target engagement produces the desired behavior change.
Within the SOBC Measures Repository, researchers have access to measures of mechanistic targets that have been (or are in the process of being) validated by SOBC Research Network Members and other experts in the field. The SOBC Validation Process includes three important stages of evaluation for each proposed measure: Identification, Measurement, and Influence.
The first stage of validation requires a measure to be Identified within the field; there must be theoretical support for the specific measure of the proposed mechanistic target or potential mechanism of behavior change. This evidence may include references for the proposed measure, or theoretical support for the construct that the proposed measure is intended to assess. The second stage of validation requires demonstration that the level and change in level of the chosen mechanistic target can be Measured with the proposed measure (assay). For example, if the proposed measure is a questionnaire, the score on the measure should indicate the activity of the target process, and it must have strong psychometric properties. The third stage of validation requires demonstration that the measure can be Influenced; there must be evidence that the measured target is malleable and responsive to manipulation. Evidence relating to each stage includes at least one peer-reviewed publication or original data presentation (if no peer-reviewed research is available to support the claim) and is evaluated by SOBC Research Network Members and experts in the field.
Once a measure has gone through these three stages, it will then either be Validated or Not validated according to SOBC Research Network standards. If a measure is Validated, then change in the measured target was reliably associated with Behavior Change. If a measure is Not validated, then change in the measured target was not reliably associated with Behavior Change. Why would we share measures that are not validated? The SOBC Research Network values open, rigorous, and transparent research. Our goal is to make meaningful progress and develop replicable and effective interventions in behavior change science. Therefore, the SOBC sees value in providing other researchers in the field with information regarding measures that work and measures that fall short for specific targets. Further, a measure that is not validated for one target in one population may be validated for another target or population.
Want to learn more? For any questions regarding the SOBC Validation Process or Measures Repository, please email email@example.com.
Has the mechanism been identified as a potential target for behavior change? This section summarizes theoretical support for the mechanism.
Have the psychometric properties of this measure been assessed? This section includes information such as content validity, internal consistency, and test-retest reliability.
Has a study manipulation led to change in the mechanism? This section addresses evidence that this measure is modifiable by experimental manipulation or clinical intervention.
Has a change in this mechanism been associated with behavior change? This section addresses empirical evidence that causing change in the measure reliably produces subsequent behavior change.