1. Introduction

Interactions between the life history of a pathogen and the environment in which it is embedded drive the evolution of virulence. These interactions thus dictate both the experience of disease at the individual host level and the shape of disease dynamics in host populations [1,2]. The nature of the interaction between virulence and transmission has been the object of both theoretical and empirical examination [2,3,4,5,6,7,8]. Free-living survival, here defined as the ability of a pathogen to persist outside of its host, is one of many transmission life-history traits associated with virulence. The relationship between the two varies across host–pathogen types and environments [4,8,9,10].

Several canonical hypotheses in the evolution of virulence theorize its relationship with transmission traits. The Curse of the Pharaoh hypothesis, named after a tale about a mythical curse that torments those who dig up the tombs of Egyptian pharaohs [11], suggests that, if a parasite has high free-living survival, then it is far less dependent on its host for transmission and, consequently, will have no evolutionary incentive to decrease virulence [2,4,12]. The potential negative fitness consequences of killing hosts rapidly (being highly virulent) can be counteracted by persisting in the environment until the arrival of new susceptible hosts. Any presumptive selection for beneficence may be relaxed: parasites can detrimentally affect the health of hosts at no cost to transmission because most of their life cycle is spent outside of a host. Previous studies support a positive correlation between free-living survival and mortality per infection (a common proxy for virulence) [13].

Alternatively, the "tradeoff" hypothesis suggests that there is some intermediate level of parasite virulence [3,6,14] that is optimal for a given setting.
In this scenario, too high a virulence kills the host (and, with it, the parasite), while too low a virulence leads to failure to transmit. Applying this hypothesis specifically to free-living survival suggests that selection for increased free-living survival should come at the expense of virulence (producing a pathogen that is less harmful to the host). Mechanistically, as a consequence of increased adaptation to a nonhost environment, a virus may be less fit to replicate inside a host [9,15]. For example, a more robust viral capsid may help the virus survive harsh environmental conditions but may make it more difficult to package RNA/DNA [15]. More generally, the tradeoff hypothesis can be framed in the context of a life-history tradeoff: investment in certain parts of the life cycle often comes at the expense of others [2,16].

Theoretical studies have explored varying evolutionary relationships between heightened virulence and extreme pathogen longevity [4,5,12,17,18,19]. One critical component of these studies is whether virulence evolves independently of free-living survival. For example, some models have argued [4] that pathogen virulence is independent of survival under a set of conditions: when the host–pathogen system is at an equilibrium (evolutionary and ecological), when host density fluctuates around an equilibrium, or when turnover of the infected host population is fast relative to that of the pathogen in the environment. However, if the host–pathogen system is at disequilibrium and the dynamics of propagules in the environment are fast compared to the dynamics of infected hosts, then virulence is, as hypothesized, an increasing function of propagule survival [4]. Kamo and Boots [17] examined this hypothesis by incorporating spatial structure into the environment using a cellular automaton model and found that, if virulence evolution is independent of transmission, then long-lived infective stages select for higher virulence.
However, if there is a tradeoff between virulence and transmission, there is no evidence for the Curse of the Pharaoh hypothesis, and in fact, higher virulence may be selected for by shorter-lived rather than longer-lived infectious stages. Further, the evolution of high virulence does not have to occur solely through a transmission–virulence tradeoff. Day [18] demonstrated how pathogens can evolve high virulence and even select for traits that kill the host (e.g., toxins) if pathogen transmission and reproductive success are decoupled. These studies emphasize the context-dependence of virulence–survival relationships. Understanding where in the relationship between virulence and survival a given pathogen population exists may allow one to understand how virus evolution will manifest at the level of epidemics.

In this study, we examine the epidemic consequences of different virulence–survival relationships (positive and negative correlations) in a viral disease with an environmental transmission component. In order to measure how pathogen survival influences disease dynamics, we included an environmental compartment in our model, representing contaminated environments that act as a reservoir for persisting pathogens and cause disease spread when susceptible individuals come into contact with them (infection via "fomites") [20,21]. We find that the identity of the virulence–free-living survival relationship (e.g., positive vs. negative) has distinct implications for how an epidemic will unfold. Some, but not all, features of an outbreak are dramatically influenced by the nature of the underlying virulence–survival relationship. This indicates that signatures of evolution (adaptive or otherwise) in a pathogen population will manifest more conspicuously in certain features of an outbreak. We reflect on these findings in light of their theoretical implications for the evolution and ecology of infectious disease and for their potential utility in public health interventions.
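The role of an environmental reservoir described above can be illustrated with a minimal compartmental sketch. This is not the model analyzed in this study: the SIR-plus-environment structure, parameter values, and linear shedding and decay terms below are illustrative assumptions, chosen only to show how free-living survival (the inverse of the environmental decay rate) can shape the course of an epidemic.

```python
# Minimal SIR model with an environmental ("fomite") compartment W.
# Illustrative sketch only: parameters and functional forms are
# assumptions, not values from this study.

def simulate(beta_d=0.3, beta_w=0.1, sigma=0.05, k=0.02, gamma=0.1,
             days=200, dt=0.1):
    """Euler-integrate S, I, R plus an environmental reservoir W.

    beta_d : direct (host-to-host) transmission rate
    beta_w : transmission rate from the contaminated environment
    sigma  : pathogen shedding rate into the environment
    k      : free-living decay rate (small k = high free-living survival)
    gamma  : recovery rate
    """
    S, I, R, W = 0.99, 0.01, 0.0, 0.0
    for _ in range(int(days / dt)):
        new_inf = (beta_d * I + beta_w * W) * S   # direct + fomite infections
        dS = -new_inf
        dI = new_inf - gamma * I
        dR = gamma * I
        dW = sigma * I - k * W                    # shedding minus decay
        S += dS * dt; I += dI * dt; R += dR * dt; W += dW * dt
    return S, I, R, W

# Higher free-living survival (smaller k) lets the reservoir accumulate,
# so the final susceptible fraction ends up lower (a larger epidemic).
S_short, *_ = simulate(k=0.5)
S_long, *_ = simulate(k=0.02)
assert S_long < S_short
```

In this sketch, lowering the decay rate k increases the standing level of environmental pathogen, which keeps infecting susceptibles even as the infected class declines, mimicking the reservoir effect attributed to fomite transmission in the text.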