2.2. The AI Model

The high-fidelity AnyLogic model suffers at inference time due to the computational cost of running the simulation. A single simulation run can take up to 90 s, which is too slow for real-time analysis tasks. Moreover, because of the stochastic nature of the drive-through simulation, a significant number of simulation runs (Monte Carlo sampling) is needed for each parameter setting, further increasing the required simulation time. To alleviate this, we train a neural network to predict the outputs of the simulation from the model parameters. As is often the case when training neural networks, the main challenge lies in gathering a large amount of training data. For this, we make use of AnyLogic's parallelized computation capability to execute large batches of simulation runs simultaneously. In doing so, we strategically sample across a wide range of parameters covering the domain of interest for real-world scenarios. The parameter simulation ranges can be found in Table 2. We chose to sample all variables uniformly, as we have no knowledge of the parameter distributions in the real-world application of the model. Furthermore, by sampling in such an unbiased fashion, we reduce the chance of the network overfitting to some intrinsic property of our training set. Some conditional restrictions are imposed on the binary variables, as noted in Table 1. Altogether, we generated approximately 125 k simulation samples, providing sufficiently dense coverage of the training domain.
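The sampling strategy can be sketched as follows. This is a minimal illustration only: the parameter names, ranges, and the conditional rule between binary variables are hypothetical placeholders, since the actual entries of Tables 1 and 2 are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_samples = 125_000  # approximate size of the generated training set

# Hypothetical continuous parameter ranges standing in for Table 2 entries.
continuous_ranges = {
    "arrival_rate": (10.0, 120.0),    # cars per hour (illustrative)
    "order_time_mean": (20.0, 90.0),  # seconds (illustrative)
    "payment_time_mean": (15.0, 60.0),
}
binary_params = ["dual_lane", "mobile_ordering"]  # illustrative binary switches


def sample_parameters(n):
    """Draw n parameter settings uniformly, mirroring the unbiased sampling strategy."""
    samples = {
        name: rng.uniform(low, high, size=n)
        for name, (low, high) in continuous_ranges.items()
    }
    for name in binary_params:
        samples[name] = rng.integers(0, 2, size=n)
    # Example of a conditional restriction between binary variables (cf. Table 1);
    # the rule shown here (mobile ordering requires a dual lane) is assumed.
    samples["mobile_ordering"] &= samples["dual_lane"]
    return samples


params = sample_parameters(n_samples)
# Each sampled parameter setting would then be dispatched to a batch of parallel
# AnyLogic runs, and the recorded simulation outputs used as regression targets
# for the neural network surrogate.
```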