We began our analyses by testing the cross-cultural equivalence of the PSS-10. To this end, we conducted a multi-group confirmatory factor analysis in R (R Core Team, 2014) using the lavaan (Rosseel, 2012) and semTools (Jorgensen et al., 2019) packages. We compared a model assuming a two-factor structure (positive and negative, with the latter consisting of reversed items; Roberti et al., 2006) across 26 countries and areas (configural invariance) with a model in which factor loadings and latent correlations were constrained to be equal across groups (metric invariance), and with a model in which items' intercepts were additionally constrained to be the same in all groups (scalar invariance).

When evaluating model fit, we relied on the commonly applied criteria (Hu & Bentler, 1999), according to which a comparative fit index (CFI) and a Tucker–Lewis index (TLI) above .90 indicate adequate fit, whereas a root-mean-square error of approximation (RMSEA) below .08 and a standardised root-mean-square residual (SRMR) below .06 indicate no misfit. When evaluating measurement equivalence, we compared the configural invariance model with the metric invariance model, and then the metric invariance model with the scalar invariance model. As these models were increasingly restrictive (each subsequent model was nested within the previous one), we assessed their relative fit using the cut-off criteria recommended for testing measurement invariance: a change in CFI of less than .01 (∆CFI < .01), a change in RMSEA of less than .015 (∆RMSEA < .015), and a change in SRMR of less than .01 (∆SRMR < .01) would indicate that the two compared models do not differ in model fit (Chen et al., 2008; Cheung & Rensvold, 2002).
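This sequence of increasingly constrained models maps directly onto lavaan's cfa() function and its group.equal argument, with semTools used for the nested comparisons. The following is a minimal sketch of that workflow rather than the study's actual code: the data frame name (dat), item variables (pss1–pss10), grouping variable (country), and the exact item-to-factor assignment are illustrative assumptions.

    library(lavaan)
    library(semTools)

    # Two-factor PSS-10 model; assigning the reversed items (4, 5, 7, 8)
    # to one factor follows the common split and is assumed here
    model <- '
      negative =~ pss1 + pss2 + pss3 + pss6 + pss9 + pss10
      positive =~ pss4 + pss5 + pss7 + pss8
    '

    # Configural invariance: same structure, all parameters free
    fit_configural <- cfa(model, data = dat, group = "country")

    # Metric invariance: factor loadings constrained to equality
    # (equality of latent covariances could be added via "lv.covariances")
    fit_metric <- cfa(model, data = dat, group = "country",
                      group.equal = "loadings")

    # Scalar invariance: loadings and item intercepts constrained
    fit_scalar <- cfa(model, data = dat, group = "country",
                      group.equal = c("loadings", "intercepts"))

    # Nested-model comparison; inspect ∆CFI, ∆RMSEA, and ∆SRMR
    # against the cut-offs described above
    summary(compareFit(fit_configural, fit_metric, fit_scalar))

semTools also provides helpers such as measEq.syntax(), which generate the constrained model syntax for each invariance level automatically.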