STUDY 1. Ocular Exploration of Emotional Facial Expression: High vs Low Neuroticism

Volunteers and design

The ethics committee Oxfordshire REC B (Ref. 09/H0605/60) approved the study, and all participants gave written consent. Fifty subjects (18 males; mean age=28.93±7.7 years) were recruited from the general population on the basis of neuroticism scores on the Eysenck Personality Questionnaire (EPQ) (Eysenck and Eysenck, 1975) that were either low (⩽6) or high (⩾16); the high cutoff corresponds approximately to the mean neuroticism score of the same-aged general population (Eysenck et al, 1985) +1 SD, which conveys a 50–60% increase in lifetime risk of developing depression (Kendler et al, 1993). Participants were screened for axis I disorders using the Structured Clinical Interview for DSM-IV (First et al, 1996) and completed the State-Trait Anxiety Inventory (Spielberger et al, 1983), Beck Depression Inventory (Beck et al, 1961), Dysfunctional Attitude Scale (Weissman and Beck, 1978), and visual analogue scales (VAS) rating happiness, sadness, hostility, alertness, anxiety, and calmness. Three subjects were excluded before data collection owing to a history of depression (all from the high neuroticism group) and three owing to poor quality of the data recording (two from the high and one from the low neuroticism group). The final sample comprised 44 subjects (17 males; mean age=29.6±7.9 years), of whom 24 had low neuroticism scores (Low Ns) (10 males; mean N=2.62±1.4) and 20 had high neuroticism scores (High Ns) (7 males; mean N=18.45±2.2). The gender imbalance in the High Ns group reflects the distribution of this personality trait in the general population (Lynn and Martin, 1997).

Gender discrimination task

A task previously adopted in fMRI experiments with a similar sample was used (Chan et al, 2009; Di Simplicio et al, 2013). Participants were presented on a computer screen with eight faces displaying prototypical expressions of fear and happiness and a neutral face, with emotional expressions morphed at 30% (low), 60% (medium), and 100% (high) intensity along the neutral-prototypical continuum. A total of 168 stimuli were presented in random order for 500 ms each, with the intertrial interval varying according to a Poisson distribution (mean=5000 ms). Participants indicated the gender of each face by a key press. Behavioral responses (accuracy and reaction time, RT) were recorded using Presentation (Neurobehavioral Systems, Albany, CA).

Eye-movement recording

During the task, eye movements were recorded using an infrared eye-tracking system (ISCAN, Woburn, MA) at a sampling rate of 60 Hz. A chin rest ensured a stable head position at a screen distance of 100 cm. Eye position was calibrated using a five-point calibration procedure. Data were analyzed using the ILAB software (Gitelman, 2002) in the Matlab environment (MathWorks, Inc.). For each trial, blinks were filtered using the pupil-size parameter, and missing points were replaced by linear interpolation between preblink and postblink values. To derive eye-movement measures, three rectangular regions of interest (ROIs) were drawn covering (1) the eye region, (2) the mouth and lower nose region, and (3) the whole face. Parameters for the eyes and mouth ROIs were computed as the proportion of ocular movements made over that ROI relative to the total ocular movements made over the whole face. Four dependent measures were extracted for analysis: number of fixations, scanpath length, scanning time, and gaze maintenance.
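As an illustration of the preprocessing and ROI-based computation described above, the following minimal Python/NumPy sketch interpolates blink gaps flagged by pupil size and computes the proportion of gaze samples falling within an ROI relative to the whole-face ROI. It is not the original Matlab/ILAB code; the pupil threshold, ROI coordinates, and variable names are illustrative assumptions.

```python
import numpy as np

def interpolate_blinks(x, y, pupil, min_pupil=1.0):
    # Replace samples flagged as blinks (pupil size below threshold) by linear
    # interpolation between the last preblink and first postblink samples.
    # The threshold value and units are illustrative, not those of the study.
    x = np.asarray(x, dtype=float).copy()
    y = np.asarray(y, dtype=float).copy()
    valid = np.asarray(pupil, dtype=float) > min_pupil
    idx = np.arange(x.size)
    if valid.any() and not valid.all():
        x[~valid] = np.interp(idx[~valid], idx[valid], x[valid])
        y[~valid] = np.interp(idx[~valid], idx[valid], y[valid])
    return x, y

def roi_proportion(x, y, roi, face_roi):
    # Proportion of gaze samples inside `roi` relative to samples inside the
    # whole-face ROI; ROIs are (left, top, right, bottom) in screen pixels.
    def inside(r):
        return (x >= r[0]) & (y >= r[1]) & (x <= r[2]) & (y <= r[3])
    n_face = inside(face_roi).sum()
    return inside(roi).sum() / n_face if n_face else np.nan

# Hypothetical single-trial gaze trace: 60 Hz over 500 ms gives ~30 samples.
rng = np.random.default_rng(0)
x = rng.normal(512, 15, 30)          # horizontal gaze position (pixels)
y = rng.normal(330, 15, 30)          # vertical gaze position (pixels)
pupil = np.ones(30)
pupil[10:13] = 0.0                   # simulated blink
x, y = interpolate_blinks(x, y, pupil)
eyes_roi = (440, 290, 580, 360)      # assumed pixel boxes, not the study's ROIs
face_roi = (410, 250, 610, 520)
print(roi_proportion(x, y, eyes_roi, face_roi))
```

The four dependent measures extracted from such preprocessed traces are defined next.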
Fixations represent the frequency of stationary gaze points during scanning and were defined as a set of eye positions remaining within a defined area (0.30 × 0.30 degrees) for a set minimum duration (100 ms), calculated using a velocity/distance algorithm (Gitelman, 2002). Scanpath length refers to the summed distance traveled by the eye during scanning, measured in degrees of visual angle (expressed in pixels). Scanning time refers to the time spent by the eye traveling over the stimuli, calculated in ms. Gaze maintenance indicates whether or not gaze was maintained within the selected ROI for the entire trial duration (500 ms). For each outcome measure, a minimum of 25% valid trials was required for an individual's data to be included.

Statistical analysis

Box plots were used to identify outliers; values more than 3 box-lengths from the 75th percentile (box-length=interquartile range) were excluded. Behavioral data (mean gender discrimination accuracy and mean RT) were analyzed using repeated-measures ANOVAs with group (High vs Low Ns) as the between-subjects variable and emotion (fearful, happy) and intensity of facial expression (high, medium, low) as the within-subjects variables. Eye-movement parameters were analyzed using repeated-measures ANOVAs, first over the whole-face ROI with group (High vs Low Ns) as the between-subjects variable and emotion (fearful, happy) and intensity of facial expression (high, medium, low) as the within-subjects variables. Ocular exploration over the different face areas was then analyzed by adding within-face ROI (eyes, mouth) as a further within-subjects variable to the repeated-measures ANOVAs. Significant interactions were interpreted with the aid of Bonferroni-corrected post hoc comparisons between levels of the different factors and of simple main-effect analyses. Independent-samples t-tests were performed to examine group differences in affect and personality ratings. Correlation analyses were run to check for relationships between eye-movement parameters and both behavioral responses and affect ratings in the High Ns group. To simplify the correlation analysis, eye-movement data were collapsed across facial expression intensities.
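The short Python/NumPy sketch below illustrates two of the computations described above: the scanpath length as a summed sample-to-sample distance, and the box-plot outlier rule. It is illustrative only, not the analysis code used in the study; the distance is left in pixels (conversion to degrees of visual angle would require the screen geometry and the 100-cm viewing distance), and treating the outlier rule as one-sided (upper tail only) is an assumption based on the rule as stated.

```python
import numpy as np

def scanpath_length(x, y):
    # Summed Euclidean distance traveled by the eye across successive gaze
    # samples, here in pixels rather than degrees of visual angle.
    dx = np.diff(np.asarray(x, dtype=float))
    dy = np.diff(np.asarray(y, dtype=float))
    return np.sum(np.hypot(dx, dy))

def boxplot_upper_outliers(values, k=3.0):
    # Flag values lying more than k box-lengths (k * IQR) beyond the 75th
    # percentile, mirroring the exclusion rule described above; restricting
    # the rule to the upper tail is an assumption.
    v = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(v, [25, 75])
    return v > q3 + k * (q3 - q1)

# Hypothetical per-subject mean RTs (ms); only the extreme value is flagged.
rts = [420.0, 455.0, 430.0, 470.0, 445.0, 460.0, 1500.0]
print(boxplot_upper_outliers(rts))
# Hypothetical gaze coordinates (pixels) for a single trial.
print(scanpath_length([512, 520, 600, 605], [330, 332, 340, 338]))
```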