We used an 80–20 train–test split to train the 5-layer feed-forward neural network shown in Figure 4. The network was trained with the Adam optimizer (learning rate 0.001) to minimize the mean absolute error, using a batch size of 256. Training was stopped after 100 epochs; the test and validation error curves are shown in Figure 5.
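The training setup described above can be sketched as follows. This is a minimal illustration in PyTorch, not the paper's actual implementation: the input/output dimensions, hidden-layer widths, activation functions, and the synthetic data are all assumptions, since the text specifies only the depth, optimizer, learning rate, loss, batch size, and epoch count.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in data; the real dataset and its dimensions are not
# specified in the text (10 inputs / 1 output are assumptions).
X = torch.randn(1000, 10)
y = torch.randn(1000, 1)

# 80-20 train-test split, as described.
n_train = int(0.8 * len(X))
train_loader = DataLoader(
    TensorDataset(X[:n_train], y[:n_train]),
    batch_size=256,  # batch size from the text
    shuffle=True,
)

# A 5-layer feed-forward network; hidden widths and ReLU are illustrative.
model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
loss_fn = nn.L1Loss()  # mean absolute error

# Fixed 100-epoch training run, as in the text.
for epoch in range(100):
    for xb, yb in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
```

Held-out error would then be computed by evaluating `loss_fn` on the remaining 20% of the data with gradients disabled.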