Xavier initialization (also called Glorot initialization) is a method for setting the initial weights of a neural network in order to avoid common training problems such as vanishing and exploding gradients. Rather than choosing starting weights arbitrarily, the method scales them to intermediate values based on the number of connections coming into and out of each layer.
Xavier initialization helps networks converge because it keeps neuron activations in a reasonable range — in the words of some data scientists, not in "saturated" regions, where the activation function's gradient is nearly zero, or "dead" regions, where a neuron stops responding altogether. With the weights balanced this way, gradient-based training can make steady progress.
Within each neuron, the weighted inputs are summed by the transfer function and then passed through the activation function to produce the eventual output. The guiding principle of Xavier initialization is that the variance of a layer's outputs should equal the variance of its inputs, so that signals neither grow nor shrink as they propagate through the layers — a kind of stability that keeps training well behaved.
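As a minimal sketch of how this works in practice, the code below implements the common "Glorot uniform" variant, which draws each weight from a uniform distribution on [-limit, limit] with limit = sqrt(6 / (n_in + n_out)); that choice gives the weights a variance of 2 / (n_in + n_out), which preserves signal variance across the layer. The function name `xavier_init` and the 256-unit layer sizes are illustrative, not from any particular library.

```python
import numpy as np

def xavier_init(fan_in, fan_out, rng=None):
    """Xavier/Glorot uniform initialization: draw a (fan_in, fan_out)
    weight matrix from U(-limit, limit), limit = sqrt(6 / (fan_in + fan_out)).
    A uniform distribution on that interval has variance 2 / (fan_in + fan_out)."""
    rng = np.random.default_rng() if rng is None else rng
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

# Sanity check: with unit-variance inputs, the layer's pre-activation
# outputs should also have variance close to 1.
rng = np.random.default_rng(0)
x = rng.normal(size=(10_000, 256))   # inputs with variance ~1
W = xavier_init(256, 256, rng=rng)   # hypothetical 256-unit layer
print(x.var(), (x @ W).var())        # both values should be near 1.0
```

Because the output variance of a linear layer is roughly fan_in times the weight variance times the input variance, the 2 / (n_in + n_out) target balances the variance-preserving needs of the forward pass (which depends on n_in) and the backward pass (which depends on n_out).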