Oh wow, missed this! Awesome work Kev, you have a history of popping up with surprise entries =)
I'd be curious to know more about the Tensorflow work you did to make the KNN features...
Thanks! I wrote a brief description under BeepBoop/Understanding_BeepBoop, but I'll release the code too once I get it cleaned up.
Aha, I missed the last section. Surprised there wasn't more to gain from some kind of deeper embedding model.
Me too, and I'll maybe revisit it at some point. Theoretically, a deeper embedding model could learn feature interactions like "wall-ahead is more important when velocity is 8 than when it is 0".
I’m surprised as well. Btw, how many layers are you using in the deeper model? And is it fully connected? I'd guess some deeper models with explicit feature interactions may work better in the Robocode scenario, given the high noise. I would try things like Deep & Cross, DeepFM, etc.
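For anyone unfamiliar, the "explicit feature interaction" part of Deep & Cross boils down to cross layers of the form x_{l+1} = x_0 * (x_l . w) + b + x_l. A rough Keras sketch (the feature count is a placeholder, nothing taken from BeepBoop):

```python
import tensorflow as tf

class CrossLayer(tf.keras.layers.Layer):
    """One cross layer in the Deep & Cross style:
    x_{l+1} = x_0 * (x_l . w) + b + x_l, i.e. explicit multiplicative
    interactions between the original features and the previous layer."""
    def build(self, input_shape):
        dim = int(input_shape[0][-1])
        self.w = self.add_weight(name="w", shape=(dim, 1), initializer="glorot_uniform")
        self.b = self.add_weight(name="b", shape=(dim,), initializer="zeros")

    def call(self, inputs):
        x0, xl = inputs  # x0: raw features, xl: output of the previous cross layer
        return x0 * tf.matmul(xl, self.w) + self.b + xl

# Usage: stack a couple of cross layers over the raw feature vector.
features = tf.keras.Input(shape=(8,))  # 8 is a placeholder feature count
x = CrossLayer()([features, features])
x = CrossLayer()([features, x])
model = tf.keras.Model(features, x)
```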
I tried a few (pretty simple) variants, sketched at the end of this reply:
- Multiplying the features by a weight matrix. One nice feature of this is that a diagonal matrix recovers standard feature weighting, so this model should be strictly better than per-feature weights.
- A one-hidden-layer feedforward network.
- Summing up the embeddings from the above two.
I totally agree that allowing multiplicative feature interactions as you suggest should work better though!
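A rough sketch of what those three variants might look like in Keras; the feature count, embedding size, and hidden width are placeholders, not BeepBoop's actual values:

```python
import tensorflow as tf

NUM_FEATURES = 8   # placeholder: number of KNN features
EMBED_DIM = 8      # placeholder: embedding dimension

features = tf.keras.Input(shape=(NUM_FEATURES,))

# Variant 1: multiply the features by a weight matrix; a diagonal matrix
# recovers plain per-feature weighting.
linear_embed = tf.keras.layers.Dense(EMBED_DIM, use_bias=False)(features)

# Variant 2: one-hidden-layer feedforward network.
hidden = tf.keras.layers.Dense(32, activation="relu")(features)
mlp_embed = tf.keras.layers.Dense(EMBED_DIM)(hidden)

# Variant 3: sum of the two embeddings above.
summed = tf.keras.layers.Add()([linear_embed, mlp_embed])

embedding_model = tf.keras.Model(inputs=features, outputs=summed)
```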
It's possible that the KNN already takes that into account sufficiently. Maybe if you bump the cluster size up a lot, and change the kernel width for cluster weighting, it might force this part of the learning into the NN instead?
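By "cluster size" and "kernel width" I mean parameters like these (an illustrative NumPy sketch of kernel-weighted KNN, not BeepBoop's actual code):

```python
import numpy as np

def knn_estimate(query, points, values, cluster_size=50, kernel_width=0.1):
    # Take the cluster_size nearest points in the embedded space and weight
    # them with a Gaussian kernel on distance. A larger cluster or a wider
    # kernel flattens the weighting, pushing more of the discrimination onto
    # whatever produced the embedded `points` (e.g. the NN).
    dists = np.linalg.norm(points - query, axis=1)
    nearest = np.argsort(dists)[:cluster_size]
    weights = np.exp(-(dists[nearest] / kernel_width) ** 2)
    return np.average(values[nearest], axis=0, weights=weights)
```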
It feels like a cascade model (widely used in ads & recommendation): a deep model is put on top of a simple, very fast model, with the input of the former being the output of the latter. This architecture reduces computation by orders of magnitude, but it also restricts the power of the deep model. A best practice is then to fit the simple model to the output of the deep one, retrain the deep model on top of the new input, and repeat...
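Very roughly, the alternation I mean looks something like this (a toy sketch with scikit-learn stand-ins and made-up data, just to illustrate the loop, not a real recipe):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))  # placeholder feature matrix
y = rng.normal(size=1000)       # placeholder targets

simple = Ridge()                                   # fast model
deep = MLPRegressor(hidden_layer_sizes=(32,), max_iter=300)  # deep model

simple.fit(X, y)
for _ in range(3):
    # The deep model is trained on top of the simple model's output.
    z = simple.predict(X).reshape(-1, 1)
    deep.fit(z, y)
    # Refit the simple model to the deep model's predictions, then repeat.
    simple.fit(X, deep.predict(z))
```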
I tend to think it's right that the KNN would take such relationships between features into account in a sense, but as a statistical model what it cannot do is generalize, which increases the number of data points needed to effectively cover some areas of the input space. In many ways, for this sort of usage, I would conceptualize the potential advantage of a deep embedding not as learning the feature interactions themselves, so much as learning the generalized contour of when to de-weight features, as a noise filter of sorts.

This is a bit of a tangent, but thinking of it in terms of being like a noise filter, and also considering things like BeepBoop's velocity randomization, I also start to wonder if there could be some value in including not just the present feature values as inputs to deep embeddings, but several ticks' worth of feature history. Let the embedding learning have the potential to construct its own temporally filtered (or rate-of-change) features.
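Something like feeding the last few ticks of features into the embedding network and letting it derive its own smoothed or rate-of-change combinations. A rough Keras sketch with placeholder sizes:

```python
import tensorflow as tf

NUM_FEATURES = 8   # placeholder: per-tick features
HISTORY_TICKS = 4  # placeholder: how many past ticks to include
EMBED_DIM = 8      # placeholder: embedding dimension

# Input: the last HISTORY_TICKS ticks of feature values, most recent last.
history = tf.keras.Input(shape=(HISTORY_TICKS, NUM_FEATURES))

# Flatten the history window and let the network construct its own
# temporally filtered / rate-of-change features.
flat = tf.keras.layers.Flatten()(history)
hidden = tf.keras.layers.Dense(32, activation="relu")(flat)
embedding = tf.keras.layers.Dense(EMBED_DIM)(hidden)

history_embedding_model = tf.keras.Model(inputs=history, outputs=embedding)
```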