Methods

Survey Development

Conceptualization of the survey came from an ongoing partnership between academic researchers, a federally qualified health center (FQHC), and an equity-driven non-profit that serves as a hub for community leadership, empowerment, and transformation through social engagement. Our main project focused on exploring the experiences and dimensions of social exclusion and their effects on health outcomes. Academic partners analyzed the existing literature on social exclusion. The non-profit and FQHC partners organized three focus groups in Allentown, Pennsylvania: the first with Latinx populations, the second with Black men, and the third with immigrant populations. All partners trained community members who then facilitated the focus groups; for example, a Latino man was trained to facilitate the Latinx focus group. In these focus groups, we found that participants experienced specific, salient stressors that shaped their health outcomes, conditions that were neither regularly captured in our population health surveillance surveys nor broadly reflected in the literature on social determinants of health. Using the focus group data, academic partners began developing a brief but comprehensive survey that included these experiences. We worked with our non-profit and FQHC partners in a process that involved multiple conversations with community members who have a broad range of expertise. They included religious leaders, teachers, students and interns, health care providers, previously incarcerated and justice-involved individuals, and people with multiple chronic conditions, including substance use disorders. University partners searched for existing instruments consistent with the experiences of marginalized communities. Community members critiqued some of the existing instruments to ensure that word choices reflected their experiences and co-created new measures.
Measures

Novel measures of stressors were included to assess experiences of police brutality, including a range of negative encounters with the police and assessments of whether those encounters were necessary. We conceptualize police brutality not merely as the use of force by a police officer, but as police action that dehumanizes the victim, even without conscious intent [19, 20]. Respondents were provided with the following examples of police actions: police cursed at the respondent; police searched, frisked, or patted down the respondent; police threatened to arrest the respondent; police handcuffed the respondent; police threatened the respondent with a ticket; police shoved or grabbed the respondent; police hit or kicked the respondent; police used pepper spray or another chemical on the respondent; police used an electroshock weapon such as a stun gun on the respondent; and police pointed a gun at the respondent. For each of these actions, respondents were asked whether it had never happened to them, had happened about once or twice in their lives, happens a few times a year, happens about once a month, or happens about weekly. SHUR also assessed respondents’ evaluations of the necessity of the police actions they had experienced. They were asked: “Thinking of your most recent experience(s) with the police, would you say the action of the officer was necessary?” Our focus group participants contended that individual perceptions of the necessity of police actions are important indicators of the dehumanizing impact of police violence. We also assessed the likelihood of calling the police if there is a problem; worries about potential police brutality, arrest, or incarceration; and cause-specific stressors such as race-related impression management and concerns about housing, food, and medical bills. We collected data on reasons for perceived discrimination, such as race, language or accent, religion, immigration status, sexual orientation, and gender identity.
We also assessed the spaces and perpetrators of discrimination: whether discrimination was experienced at work or school, or perpetrated by a health care provider, a police or security officer, or an individual in one’s neighborhood. Other novel measures included in the survey are relational aspects of health care delivery, such as respondents’ perceptions of respect during their clinical encounter, specifically from receptionists, nurses, medical or nursing assistants, and physicians. The survey included three indicators of respondents’ sense of social exclusion: feeling that they are not trusted, often feeling left out, and not feeling like a member of a community. We also included existing measures of stressors such as discrimination, using the Everyday Discrimination and Heightened Racial Vigilance scales [21], the Group-Based Medical Mistrust scale [22], and the Adverse Childhood Experiences (ACEs) module [23]. We included the following measures of health status: self-rated health; activity limitations (whether the respondent is limited in any way in any activities because of physical, mental, or emotional problems); self-rated mental health; and depression and anxiety using the two-item Patient Health Questionnaire [24]. Indicators of access to care include usual source of care, health insurance, perceived unmet need for medical care, perceived unmet need for mental health care, past use of mental health services, and the probability of seeking mental health care. Sociodemographic data collected include race, gender identity, sexual orientation, age, marital status, level of education, work status, years in the USA if born outside the USA, and zip code. The survey instrument was pre-tested among a small subset of community members in Allentown (n = 11). Revisions were made, and the survey was then piloted using a convenience online sample (n = 100), with respondents from 65 zip codes across the country, the majority from the East Coast.
The final version of the survey, after piloting, is presented in Appendix 1. Approval from Lehigh University’s Institutional Review Board was obtained both for the initial social exclusion focus groups and for the survey. The focus groups and survey were funded internally by Lehigh University’s Community-engaged Health Research Fellowship and the Faculty Innovation Grant, respectively.

Data Collection

The SHUR employed quota sampling, a non-probability approach in which we screened for specific respondent characteristics to obtain a tailored sample that reflects the population of interest. The target was 4000 respondents living in urban areas in the contiguous USA. We assigned quotas for usual source of care and for race/ethnicity. Black, Indigenous, and people of color, as well as those who are poor, are more likely to receive care at specific sites rather than from a specific primary care physician with whom they have an established relationship [25]. Having a regular source of care, and the kind of place people go to for usual care, matters for relational aspects of care such as perceived respect and mistrust. Given this literature, we assigned a quota for usual source of care: at least half of the sample (n = 2000) had to report a clinic or community health center, an emergency department, or an urgent care facility as their usual source of care, or report that they did not have a usual source of care. The second quota was for race/ethnicity: of the 4000 respondents, at least 1000 (25%) had to be people of color, and no more than 65% could be non-Hispanic White. This falls within the range of US Census and Pew Center estimates of the racial demographics of urbanized areas and provides sufficient sample sizes for analyses by race/ethnicity. We contracted with Qualtrics because their panels are relatively more demographically representative than those of other online survey platforms for convenience sampling [26].
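The quota rules described above can be sketched as a simple admission check. This is a hypothetical illustration only; the actual screening and routing logic was implemented by Qualtrics and was not visible to the research team. The thresholds mirror the quotas stated in the text (total n = 4000, at least 2000 respondents with site-based or no usual source of care, at least 1000 people of color, and no more than 65% non-Hispanic White); all function and field names are our own.

```python
# Hypothetical sketch of the SHUR quota rules; Qualtrics handled the
# actual screening, so names and structure here are illustrative.

TARGET_N = 4000
MIN_SITE_BASED = 2000                 # clinic/CHC, ED/urgent care, or no usual source
MIN_POC = 1000                        # at least 25% people of color
MAX_NH_WHITE = int(0.65 * TARGET_N)   # no more than 65% non-Hispanic White

def admit(respondent, counts):
    """Return True if a screened respondent still fits the open quotas.

    `respondent` has keys 'nh_white' (bool) and 'site_based_care' (bool);
    `counts` tracks completions so far ('total', 'nh_white', 'poc',
    'site_based').
    """
    if counts["total"] >= TARGET_N:
        return False
    # Once the non-Hispanic White ceiling is reached, only people of
    # color are routed to the full survey.
    if respondent["nh_white"] and counts["nh_white"] >= MAX_NH_WHITE:
        return False
    remaining = TARGET_N - counts["total"]
    # Reserve the remaining slots for any quota that is still unfilled.
    if (MIN_SITE_BASED - counts["site_based"]) >= remaining and not respondent["site_based_care"]:
        return False
    if (MIN_POC - counts["poc"]) >= remaining and respondent["nh_white"]:
        return False
    return True

def record(respondent, counts):
    """Update the running quota counts after a completed survey."""
    counts["total"] += 1
    if respondent["nh_white"]:
        counts["nh_white"] += 1
    else:
        counts["poc"] += 1
    if respondent["site_based_care"]:
        counts["site_based"] += 1
```

For example, once 2600 non-Hispanic White respondents (65% of 4000) have completed the survey, `admit` returns False for any further non-Hispanic White respondent while continuing to admit people of color.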
Qualtrics invited respondents by partnering with over 20 Web-based panel providers to access potential respondents based on the specified quotas. Respondents received some form of incentive from the panel providers, but the specific value of the incentive was not disclosed to the researchers. Qualtrics monitored the specified quotas using screening questions on race/ethnicity and usual source of care. For example, once enough non-Hispanic White respondents had completed the survey, anyone who identified as non-Hispanic White and expressed interest in taking the survey was no longer directed to the full survey. This process continued until the quotas were met. A total of 7495 persons passed the screeners and met the quota requirements. Qualtrics performed quality checks on the data and removed incomplete responses. They also assessed the time it took respondents to complete the survey. The median completion time was 10 min. Respondents who took less than a third of the median time were excluded from the final sample because of the possibility that they were not paying attention to the questions and might have been checking response boxes as quickly as possible. After these checks, we were left with 4389 completed responses.
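The speed-based exclusion rule can be expressed compactly. The sketch below is a minimal illustration of the rule as stated in the text (drop responses completed in under one third of the median time); Qualtrics performed the actual checks, and the function name is our own.

```python
from statistics import median

def exclude_speeders(durations_sec):
    """Drop responses completed in under one third of the median
    completion time, per the speeding rule described above."""
    cutoff = median(durations_sec) / 3
    return [d for d in durations_sec if d >= cutoff]
```

With a median completion time of 10 minutes (600 seconds), any response completed in under roughly 200 seconds would be dropped.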