The “cold” version of the Columbia Card Task (CCT) is a behavioral measure of decision making. (For comparison, see also the “hot” version of the CCT in the Measures Repository.) The task is constructed such that the optimal decision making strategies are the same for the cold CCT as for the hot CCT, but the procedures vary either to engage (hot version) or not engage (cold version) affective processes.

In the cold CCT, participants are presented with a display of 32 cards arranged face down in a grid of four rows and eight columns. On each trial they are asked to indicate how many cards they would like to turn over, and they are instructed that they may turn over as many cards as they wish for a given display with the goal of maximizing their earnings. Each gain card turned over adds to their total earnings; each loss card turned over subtracts from their earnings and also immediately terminates the trial. As long as no loss card is turned over, a higher number of cards turned over yields a higher total amount won, which incentivizes turning over more cards. However, turning over more cards is also associated with a higher probability of encountering a loss, which incentivizes turning over fewer cards. Therefore, to maximize their earnings, participants must properly weigh the probability of a loss, the gain amount, and the loss amount as they decide how many cards to turn over on each trial.

The task uses a 3 x 3 x 3 factorial design involving probability of loss (1, 2, or 3 loss cards per display), gain amount (10, 20, or 30 points), and loss amount (250, 500, or 750 points), with two trials per cell of the design, for a total of 54 trials. The dependent measure is the average number of cards turned over in the task.
Those three factors may be analyzed to determine whether participants make use of one, two, or all three of them to reach their decisions. Ultimately, a higher average number of cards turned over reflects increased risk taking.
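The trade-off described above can be made concrete with a short calculation. The sketch below is illustrative only: the function name is ours, and it assumes (per the description above) that gains accrue for every card revealed before the first loss card, which ends the trial. It computes the exact expected payoff of electing to turn over n cards for any cell of the 3 x 3 x 3 design.

```python
from fractions import Fraction
from itertools import product

# The 27 cells of the 3 x 3 x 3 design (2 trials per cell = 54 trials):
# (number of loss cards, gain per card, loss amount)
DESIGN_CELLS = list(product((1, 2, 3), (10, 20, 30), (250, 500, 750)))

def expected_payoff(n, n_loss, gain, loss, deck=32):
    """Exact expected payoff of electing to turn over n of the 32 cards.

    Cards are revealed one at a time: each gain card adds `gain`;
    the first loss card subtracts `loss` and terminates the trial.
    """
    n_gain = deck - n_loss
    ev = Fraction(0)
    p_safe = Fraction(1)  # P(first j draws are all gain cards)
    for j in range(n):
        # First loss occurs on draw j + 1, after j safe draws.
        p_loss_here = p_safe * Fraction(n_loss, deck - j)
        ev += p_loss_here * (j * gain - loss)
        p_safe *= Fraction(n_gain - j, deck - j)
    ev += p_safe * (n * gain)  # no loss among all n draws
    return ev

# Risk-neutral optimum for the most favorable cell:
# 1 loss card, 10-point gains, 250-point loss.
best_n = max(range(33), key=lambda n: expected_payoff(n, 1, 10, 250))
```

Under these assumptions, the expected payoff for that cell peaks at six or seven cards (both yield 105/16 ≈ 6.56 points): turning over more cards raises the potential total but also the probability of hitting the loss card, exactly the two opposing incentives described above.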
The “cold” version of the Columbia Card Task (CCT) measures deliberative decision making processes. This interpretation of the measure is supported by findings of lower self-reported “gut-level” decision making and higher “mathematical” decision making for the cold vs. hot versions of the CCT (Figner, Mackinlay, Wilkening, & Weber, 2009). Deliberative decision making is a potentially important aspect of self-regulation that is believed to be separable from affective decision making processes, and the distinction between these two constructs has important implications for risky decision making and unhealthy behaviors during human development (e.g., adolescence; Casey, Jones, & Hare, 2008).
Text Citation: Casey, B. J., Jones, R. M., & Hare, T. A. (2008). The adolescent brain. Annals of the New York Academy of Sciences, 1124(1), 111-126.
Text Citation: Figner, B., Mackinlay, R. J., Wilkening, F., & Weber, E. U. (2009). Affective and deliberative processes in risky choice: Age differences in risk taking in the Columbia Card Task. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(3), 709-730.
The psychometric properties of this measure have not yet been assessed.
This measure has not yet been shown to be influenced by experimental manipulation.
This measure has not been validated yet.
The Science of Behavior Change (SOBC) program seeks to promote basic research on the initiation, personalization, and maintenance of behavior change. By integrating work across disciplines, this effort will lead to an improved understanding of the underlying principles of behavior change. The SOBC program aims to implement a mechanisms-focused, experimental medicine approach to behavior change research and to develop the tools required to implement such an approach. The experimental medicine approach involves: identifying an intervention target, developing measures to permit verification of the target, engaging the target through experimentation or intervention, and testing the degree to which target engagement produces the desired behavior change.
Within the SOBC Measures Repository, researchers have access to measures of mechanistic targets that have been (or are in the process of being) validated by SOBC Research Network Members and other experts in the field. The SOBC Validation Process includes three important stages of evaluation for each proposed measure: Identification, Measurement, and Influence.
The first stage of validation requires a measure to be Identified within the field; there must be theoretical support for the specific measure of the proposed mechanistic target or potential mechanism of behavior change. This evidence may include references for the proposed measure, or theoretical support for the construct that the proposed measure is intended to assess. The second stage of validation requires demonstration that the level and change in level of the chosen mechanistic target can be Measured with the proposed measure (assay). For example, if the proposed measure is a questionnaire, the score on the measure should indicate the activity of the target process, and it must have strong psychometric properties. The third stage of validation requires demonstration that the measure can be Influenced; there must be evidence that the measured target is malleable and responsive to manipulation. Evidence relating to each stage includes at least one peer-reviewed publication or original data presentation (if no peer-reviewed research is available to support the claim) and is evaluated by SOBC Research Network Members and experts in the field.
Once a measure has gone through these three stages, it will then either be Validated or Not validated according to SOBC Research Network standards. If a measure is Validated, then change in the measured target was reliably associated with Behavior Change. If a measure is Not validated, then change in the measured target was not reliably associated with Behavior Change. Why would we share measures that are not validated? The SOBC Research Network values open, rigorous, and transparent research. Our goal is to make meaningful progress and develop replicable and effective interventions in behavior change science. Therefore, the SOBC sees value in providing other researchers in the field with information regarding measures that work and measures that fall short for specific targets. Further, a measure that is not validated for one target in one population may be validated for another target or population.
Want to learn more? For any questions regarding the SOBC Validation Process or Measures Repository, please email email@example.com.
Has the mechanism been identified as a potential target for behavior change? This section summarizes theoretical support for the mechanism.
Have the psychometric properties of this measure been assessed? This section includes information such as content validity, internal consistency, and test-retest reliability.
Has a study manipulation led to change in the mechanism? This section addresses evidence that this measure is modifiable by experimental manipulation or clinical intervention.
Has a change in this mechanism been associated with behavior change? This section addresses empirical evidence that change in the measured target reliably produces subsequent behavior change.