The sequential model predicts that participants update their beliefs partly on the basis of agreement between the subject and agent, and partly on the basis of the agent’s correctness, but it does not allow for an interaction between the two. In a post hoc effort to directly relate these two approaches, we constructed an additional reinforcement-learning algorithm that allows for differential updating on AC, DC, AI, and DI trial types
for people and algorithms (see Supplemental Information for details). Due to the large number of parameters, this model was not identifiable in individual subjects but could be identified for the group using a fixed-effects analysis. We computed maximum likelihood estimates (MLEs) for the eight relevant learning rates: people, γp on AC trials, ηp on DC trials, φp on AI trials, and λp on DI trials; algorithms, γa on AC trials, ηa on DC trials, φa on AI trials, and λa on DI trials. As shown in Figure S4A, this analysis revealed a greater MLE for γp than for γa, the learning rate constants on AC trials, but a smaller MLE for φp than for φa, the learning rate constants on AI trials. The differences between MLEs on DC and DI trials were notably smaller. These results are consistent with the regression results, in that the group
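The core of such a reinforcement-learning model can be sketched as a delta-rule update whose learning rate depends on the trial type. This is an illustrative sketch only: the function name, the dictionary layout, and the numerical rates below are our own assumptions, not the fitted MLEs or the authors' implementation.

```python
# Hypothetical sketch of a four-learning-rate ability update.
# Trial types: AC = agree/correct, DC = disagree/correct,
#              AI = agree/incorrect, DI = disagree/incorrect.

def update_ability(estimate, outcome, agreed, rates):
    """Delta-rule update of the estimated probability that an agent bets correctly.

    estimate : current ability estimate in [0, 1]
    outcome  : 1 if the agent's prediction was correct on this trial, else 0
    agreed   : True if the subject agreed with the agent's prediction
    rates    : dict mapping trial type ('AC', 'DC', 'AI', 'DI') to a learning rate
    """
    trial_type = ('A' if agreed else 'D') + ('C' if outcome else 'I')
    alpha = rates[trial_type]
    # Standard delta rule: move the estimate toward the observed outcome.
    return estimate + alpha * (outcome - estimate)

# One rate per trial type for a given agent class (e.g., people:
# gamma_p, eta_p, phi_p, lambda_p). These values are made up for illustration.
rates_person = {'AC': 0.30, 'DC': 0.15, 'AI': 0.10, 'DI': 0.20}

est = 0.5
est = update_ability(est, outcome=1, agreed=True, rates=rates_person)  # AC trial
```

Under this parameterization, the differential updating reported above corresponds to the AC rate being larger, and the AI rate smaller, for people than for algorithms.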
of subjects updated their ability estimates more for people than for algorithms following correct predictions with which they agreed, but less for people than for algorithms following incorrect predictions with which they disagreed. We began the analysis of the fMRI data by searching for expected value (EV) signals at choice and reward prediction error (rPE) signals at feedback. On the basis of previous findings, we expected to find EV signals in ventromedial prefrontal cortex (vmPFC) at the time subjects made decisions
and rPEs in striatum at the time of outcome (Boorman et al., 2009, FitzGerald et al., 2009, Klein-Flügge et al., 2011, Li and Daw, 2011, Lim et al., 2011, O’Doherty et al., 2004 and Tanaka et al., 2004). At the time of decision, EVs are high when subjects believe that the agent will bet correctly (or incorrectly) with high probability, because they can then confidently forecast the agent’s behavior, and low when they believe that the agent’s ability is close to 0.5, because they cannot. We estimated subjects’ trial-by-trial reward expectations and rPEs across all conditions using the sequential model and regressed these against the BOLD response across the whole brain. These contrasts revealed positive effects of the EV of the chosen option in vmPFC at choice and of rPE at feedback in both ventral and dorsal striatum, among other regions (Figure 3; chosen value, Z > 3.1 and p < 0.001, voxel-wise thresholding; rPE, Z = 3.1 and p = 0.01, corrected for multiple comparisons with cluster-based thresholding; Table S2).
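The relationship between estimated ability, EV, and rPE described above can be sketched as follows. Because subjects can bet either with or against the agent, an ability estimate near 0 is as forecastable as one near 1; the max() form used here to capture that symmetry is our illustrative assumption, not the paper's exact specification.

```python
# Hedged sketch: computing EV at choice and rPE at feedback from p_hat,
# the estimated probability that the agent predicts correctly.

def expected_value(p_hat):
    # Subjects can bet with or against the agent, so EV is high when
    # p_hat is near 0 or 1 (behavior forecastable) and minimal at 0.5.
    return max(p_hat, 1.0 - p_hat)

def reward_prediction_error(reward, ev):
    # rPE = obtained reward minus the reward expected at choice.
    return reward - ev

ev = expected_value(0.8)                 # confident forecast -> high EV
rpe = reward_prediction_error(1.0, ev)   # positive surprise if rewarded
```

On this sketch, trial-by-trial EV and rPE regressors would be built by evaluating these two functions against each trial's ability estimate and outcome before regressing them on the BOLD response.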