As is often the case when training neural networks, the central challenge lies in gathering a sufficiently large amount of training data. To this end, we use AnyLogic's parallelized computation capabilities to execute large batches of simulation runs simultaneously. In doing so, we strategically sample across a wide range of parameter values centered on the domain of interest for real-world scenarios. The parameter ranges used in the simulations are listed in Table 2.
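As a minimal sketch of how such a parameter sweep could be generated, the snippet below draws a batch of parameter combinations spread evenly across given ranges and formats each row as one simulation configuration. The parameter names, bounds, batch size, and the Latin hypercube sampling scheme are illustrative assumptions, not the exact procedure used here; the actual ranges are those reported in Table 2.

```python
# Illustrative sketch only: generating a batch of parameter sets
# to drive parallel simulation runs. Names and ranges are placeholders;
# the real simulation ranges are those in Table 2.
import numpy as np
from scipy.stats import qmc

# Hypothetical parameter bounds: (lower, upper) per parameter.
param_bounds = {
    "param_a": (0.1, 5.0),
    "param_b": (1.0, 30.0),
    "param_c": (0.0, 1.0),
}

n_runs = 1000  # size of the simulation batch (assumed)

# Latin hypercube sampling spreads points evenly over the parameter ranges;
# the actual sampling strategy may differ.
sampler = qmc.LatinHypercube(d=len(param_bounds), seed=42)
unit_samples = sampler.random(n=n_runs)

lower = np.array([lo for lo, _ in param_bounds.values()])
upper = np.array([hi for _, hi in param_bounds.values()])
samples = qmc.scale(unit_samples, lower, upper)

# Each row is one parameter set for a single simulation run,
# e.g. exported and consumed by an AnyLogic parameter-variation experiment.
for run_id, row in enumerate(samples[:3]):
    config = dict(zip(param_bounds.keys(), row))
    print(run_id, config)
```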