Rolling Average vs Gradient Descent with Softmax & Cross Entropy


If each Guess Factor bin is considered an output unit before Softmax (a logit), and the loss is Cross Entropy, then the gradient of the loss with respect to each logit is:

qi - 1, if bin i is hit
qi, otherwise
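
For concreteness, here is a minimal Java sketch (not from the post) of softmax over Guess Factor bins and that gradient; the bin count and names are illustrative, not taken from any particular bot:

```java
// Softmax over Guess Factor bins and the cross-entropy gradient w.r.t. the logits.
class GfSoftmax {
    // softmax: turns one logit per bin into a probability per bin
    static double[] softmax(double[] logits) {
        double max = Double.NEGATIVE_INFINITY;
        for (double z : logits) max = Math.max(max, z);
        double[] q = new double[logits.length];
        double sum = 0;
        for (int i = 0; i < logits.length; i++) {
            q[i] = Math.exp(logits[i] - max); // shift by max for numerical stability
            sum += q[i];
        }
        for (int i = 0; i < q.length; i++) q[i] /= sum;
        return q;
    }

    // d(cross-entropy)/d(logit_i) = q_i - 1 for the hit bin, q_i for every other bin
    static double[] logitGradient(double[] q, int hitBin) {
        double[] grad = q.clone();
        grad[hitBin] -= 1.0;
        return grad;
    }
}
```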

If the gradient is not applied to the logit as usual, but instead applied to qi itself, then:

qi := qi - eta * (qi - 1) = (1 - eta) * qi + eta * 1, if bin i is hit
qi := qi - eta * qi = (1 - eta) * qi + eta * 0, otherwise

This is essentially a rolling average, where eta (the learning rate) plays the role of alpha (the decay rate) in an exponential moving average.
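
As a sanity check, here is a sketch of that update applied directly to qi, assuming q is the array of bin weights and hitBin is the bin the wave actually hit (both names are illustrative). It is line for line an exponential moving average toward a one-hot target:

```java
// Applying the "gradient" to q_i itself: identical to VCS rolling average
// with decay rate eta toward a one-hot target (1 for the hit bin, 0 elsewhere).
static void rollingAverageUpdate(double[] q, int hitBin, double eta) {
    for (int i = 0; i < q.length; i++) {
        double target = (i == hitBin) ? 1.0 : 0.0;
        q[i] = (1 - eta) * q[i] + eta * target;
    }
}
```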

Of course, this analogy isn't exactly how rolling average works, since the logit isn't actually qi. But what if we replace rolling average with real gradient descent on the logits? I suppose it could learn even faster, since the outputs far from the true value effectively get a higher decay rate...
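
For comparison, a sketch of that variant, keeping one logit per bin and doing the descent step on the logits (it reuses the softmax helper above; eta and hitBin are again illustrative):

```java
// Gradient descent on the logits: bins whose current estimate is far from the
// one-hot target receive a proportionally larger correction.
static void gradientDescentUpdate(double[] logits, int hitBin, double eta) {
    double[] q = GfSoftmax.softmax(logits);
    for (int i = 0; i < logits.length; i++) {
        double grad = q[i] - ((i == hitBin) ? 1.0 : 0.0);
        logits[i] -= eta * grad;
    }
}
```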

Xor (talk)04:49, 27 July 2021

Then, going one step further, you don't use VCS any more; instead you add the logits of the velocity bins, accel bins, distance bins, etc. all together. This structure essentially estimates the probability as a product of probabilities, e.g. the probability when velocity is high times the probability when distance is close.

If the movement profiles with respect to velocity, distance, etc. are independent, this approach behaves mostly the same as traditional segmented VCS, but with more data points behind each estimate.

Note that this approach is essentially a neural network without hidden units, i.e. multiclass logistic regression.
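
A rough sketch of that structure, assuming 31 Guess Factor bins and made-up bin counts for velocity and distance (none of these numbers come from the post). The final logit of each Guess Factor bin is the sum of one entry per attribute table, so training it by gradient descent is exactly multiclass logistic regression on one-hot attribute features:

```java
// Additive logits: one table per attribute; summing them and applying softmax
// corresponds to multiplying per-attribute probability factors.
class AdditiveLogits {
    static final int GF_BINS = 31;                          // illustrative
    final double[][] velocityLogits = new double[9][GF_BINS];
    final double[][] distanceLogits = new double[10][GF_BINS];

    double[] predict(int velBin, int distBin) {
        double[] logits = new double[GF_BINS];
        for (int gf = 0; gf < GF_BINS; gf++) {
            logits[gf] = velocityLogits[velBin][gf] + distanceLogits[distBin][gf];
        }
        return GfSoftmax.softmax(logits);                   // helper from the first sketch
    }

    void update(int velBin, int distBin, int hitGf, double eta) {
        double[] q = predict(velBin, distBin);
        for (int gf = 0; gf < GF_BINS; gf++) {
            double grad = q[gf] - ((gf == hitGf) ? 1.0 : 0.0);
            velocityLogits[velBin][gf] -= eta * grad;       // the same gradient flows
            distanceLogits[distBin][gf] -= eta * grad;      // into every attribute table
        }
    }
}
```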

Xor (talk)05:15, 27 July 2021