We trained the network with the Adam optimizer (learning rate 0.001), minimizing the mean absolute error (MAE) with a batch size of 256.
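As a concrete illustration, a minimal Keras sketch of this training configuration might look as follows. Only the optimizer, learning rate, loss, and batch size come from the text; the architecture, dataset, and number of epochs are placeholders.

```python
import numpy as np
import tensorflow as tf

# Placeholder data: the actual dataset and input dimensionality are
# not specified in this section and are assumptions for illustration.
x_train = np.random.rand(1024, 16).astype("float32")
y_train = np.random.rand(1024, 1).astype("float32")

# Placeholder architecture; the original network is not described here.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(16,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Adam with learning rate 0.001 and MAE loss, as stated in the text.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="mean_absolute_error",
)

# Batch size 256 as stated; the epoch count is an assumed placeholder.
model.fit(x_train, y_train, batch_size=256, epochs=10)
```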