Figure 4: Each layer in our network model consists of a fully connected layer with a ReLU activation function. The first four layers each use 20% dropout for regularization, while the final feature layer does not.
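
A minimal sketch of the architecture described in Figure 4, assuming a PyTorch implementation; the layer widths and feature dimension below are illustrative placeholders, since the caption only specifies fully connected layers with ReLU, 20% dropout on the first four layers, and no dropout on the final feature layer:

```python
import torch.nn as nn

def build_feature_network(in_dim=128, hidden_dim=256, feature_dim=64):
    # in_dim, hidden_dim, and feature_dim are assumed values, not from the figure.
    layers = []
    dims = [in_dim] + [hidden_dim] * 4
    # First four fully connected layers: ReLU activation followed by 20% dropout.
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        layers += [nn.Linear(d_in, d_out), nn.ReLU(), nn.Dropout(p=0.2)]
    # Final feature layer: ReLU activation, no dropout.
    layers += [nn.Linear(hidden_dim, feature_dim), nn.ReLU()]
    return nn.Sequential(*layers)
```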