The high-fidelity AnyLogic model is expensive at inference time due to the computational cost of running the simulation. A single simulation run can take up to 90 s, which is prohibitive for real-time analysis tasks. Moreover, because the drive-through simulation is stochastic, each parameter setting requires many Monte Carlo replications, further multiplying the total simulation time. To alleviate this, we train a neural network to predict the outputs of the simulation from the model parameters.
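The idea above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the parameter ranges, the quadratic response function, and the network size are all placeholder assumptions standing in for data collected from the AnyLogic simulation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for (parameter, output) pairs gathered from the
# simulation; in practice each row of X would hold model parameters
# (e.g. arrival rate, service times) and y a Monte Carlo-averaged
# output such as mean customer wait time.
X = rng.uniform(0.0, 1.0, size=(500, 3))
y = 10 * X[:, 0] + 5 * X[:, 1] ** 2 + rng.normal(0.0, 0.1, size=500)

scaler = StandardScaler().fit(X)
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64),
                         max_iter=2000, random_state=0)
surrogate.fit(scaler.transform(X), y)

# A surrogate prediction takes microseconds, versus up to ~90 s for a
# single simulation run.
y_hat = surrogate.predict(scaler.transform(X[:5]))
```

Once trained, the surrogate replaces the simulator inside any loop that would otherwise require repeated 90 s runs, such as parameter sweeps or optimization.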