9.7 INTERPRETING THE WEIGHTS IN A NEURAL NETWORK MODEL
The weights in a neural network model represent what the model is trying to tell you, given the data. These weights are analogous to the predictor coefficients in a regression model. Let us glean what information we can from the weights in Figure 9.7.
First, let us ignore the bias (constant term) weights B1 and B2, since they do not affect the relationship between the predictors and the response. Next, recall our exploratory data analysis (EDA), where we found that greater age and being male were both associated with a higher probability of death in the Framingham Heart Study. Also, Sex is a binary predictor with 1 = Male and 2 = Female, so an increase in the value of Sex should be associated with a decrease in the probability of death. Let us see whether and how these EDA results are reflected in the neural network weights.
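To see how weight signs combine along a path from predictor to output, consider a minimal sketch of a one-hidden-node network with sigmoid activations. The weight values below are hypothetical (the actual Figure 9.7 values are not reproduced here), chosen only so their signs agree with the EDA findings; Age is assumed min-max normalized to [0, 1], a common preprocessing step for neural network inputs.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights, chosen so their signs mirror the EDA findings
# (not the actual values from Figure 9.7):
w_age_h1, w_sex_h1, b1 = -0.6, 0.9, 0.1   # inputs -> hidden node H1
w_h1_o1, b2 = -1.2, 0.3                   # hidden node H1 -> output O1

def predict(age, sex):
    """Forward pass: Age (normalized) and Sex -> predicted probability."""
    h1 = sigmoid(w_age_h1 * age + w_sex_h1 * sex + b1)
    return sigmoid(w_h1_o1 * h1 + b2)

# Because w_h1_o1 is negative, a negative w_age_h1 yields a net positive
# effect of Age on the output: greater age -> higher predicted probability.
assert predict(0.8, 1) > predict(0.2, 1)
# A positive w_sex_h1 times the negative w_h1_o1 gives a net negative
# effect: moving Sex from 1 (male) to 2 (female) lowers the probability.
assert predict(0.5, 1) > predict(0.5, 2)
```

The point of the sketch is that a single weight's sign cannot be read in isolation: the net effect of a predictor on the output is governed by the product of the signs along the path through each hidden node.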
Now, the weight between the hidden layer node H1 and the output node O1 takes a negative value, WH1O1 = ...