The effect of aversive learning on visual discrimination
The COVID-19 pandemic forced us to temporarily shut down our labs and move to online research. This offered an excellent opportunity to establish a web-based paradigm for investigating visual discrimination.
In this project, we therefore study the effect of aversive learning on visual discrimination remotely. Given the enhanced visuocortical activity for threat-related compared to safety-related stimuli reported, for example, in EEG and MEG studies (Miskovic & Keil, 2012), we expected these changes to be accompanied by improved visual discrimination for threat-related stimuli. However, two recent high-quality studies yielded mixed results: both improved (Rhodes et al., 2018) and diminished (Shalev et al., 2018) discrimination accuracy after aversive learning have been reported. We therefore aim to collect new evidence by (1) adapting our fear conditioning paradigms to study threat learning remotely and (2) using a continuous measure of discrimination acuity instead of a standard psychophysical staircase (JND) approach.
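To illustrate what a continuous acuity measure could look like (as opposed to a staircase-derived JND), the sketch below shows one possible analysis in Python/NumPy, assuming an orientation reproduction task; the data, the task format, and the dispersion-based acuity index are illustrative assumptions, not our actual paradigm or analysis code.

```python
import numpy as np

# Hypothetical trial data: presented grating orientations and the
# participant's reproduced orientations, both in degrees (0-180).
rng = np.random.default_rng(0)
presented = rng.uniform(0, 180, size=200)
reproduced = (presented + rng.normal(0, 8, size=200)) % 180

def angular_error(presented, reproduced, period=180.0):
    """Signed trial-wise error on the circular orientation space."""
    return (reproduced - presented + period / 2) % period - period / 2

errors = angular_error(presented, reproduced)

# Continuous acuity index: the dispersion of trial-wise errors.
# Unlike a staircase JND, this yields a value on every trial and can
# be compared between conditions (e.g., threat- vs. safety-related cues).
print(f"error SD: {errors.std():.1f} deg")
```

The design choice here is that every trial contributes a graded error value, whereas a staircase procedure converges on a single threshold estimate per condition.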
Beyond investigating visual discrimination of basic visual features (e.g. orientation), I am highly interested in whether these mechanisms also apply to social threat and the discrimination of socially relevant stimuli (e.g. faces).