Awesome enty


I tend to think it's right that the KNN would take such relationships between features into account in a sense, but as a statistical model what it cannot do is generalize, which increases the number of data points needed to effectively cover some areas of the input space. In many ways, for this sort of usage, I would conceptualize the potential advantage of a deep embedding not as learning the feature interactions themselves, so much as learning the generalized contour of when to de-weight features, as a noise filter of sorts. This is a bit of a tangent, but thinking of it in terms of being like a noise filter, and also considering things like BeepBoop's velocity randomization, I also start to wonder if there could be some value in including not just the present feature values as inputs to deep embeddings, but including several ticks' worth of feature history.
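To make that concrete, a minimal sketch of what feeding a short window of feature history into an embedding net could look like (PyTorch-style; the feature set, window length, and layer sizes are placeholders, not BeepBoop's actual configuration):

```python
import torch
import torch.nn as nn

# Per-tick features (illustrative only, e.g. lateral velocity, advancing velocity,
# acceleration, distance, wall-ahead, wall-reverse).
FEATURES_PER_TICK = 6
HISTORY_TICKS = 8  # how many recent ticks to stack into one input

class HistoryEmbedding(nn.Module):
    """Maps a flattened window of recent per-tick features to a KNN embedding space."""
    def __init__(self, embed_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEATURES_PER_TICK * HISTORY_TICKS, 64),
            nn.ReLU(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, HISTORY_TICKS, FEATURES_PER_TICK)
        flat = history.flatten(start_dim=1)
        return self.net(flat)

# Usage: embed the last HISTORY_TICKS ticks of features for one firing situation;
# distances between these embeddings would then drive the KNN lookup.
window = torch.randn(1, HISTORY_TICKS, FEATURES_PER_TICK)  # stand-in for real scan data
embedding = HistoryEmbedding()(window)
```

The point of the sketch is just that the network, not the bot author, decides which parts of the window matter, which is the "noise filter" framing above.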

Rednaxela (talk) 18:07, 21 June 2021

Including several ticks of history seems like a nice way of removing the need for hand-crafted features like acceleration, time-since-velocity-change, distance-last-k-ticks, etc., and having the model learn them instead. Maybe a good model could even learn some PM-like behaviors.

Definitely a weakness of KNNs is generalization to new parts of the input space. I did think a bit about pre-training a model against a lot of bots and then quickly adapting it to the current opponent (maybe using meta-learning methods) so it would generalize better early in the match, before it has gathered much data. On the other hand, aiming models get a lot of data pretty quickly, so I'm not sure how much of an issue poor generalization really is.
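One plausible shape for that idea, sketched under the same assumptions as above (PyTorch-style, placeholder dimensions and loss; not how BeepBoop actually trains): pre-train the embedding offline against logged data from many bots, then keep updating the same weights online against the current opponent.

```python
import torch
import torch.nn as nn

def make_embedding() -> nn.Module:
    # Same flattened history-window input as the earlier sketch (48 = 8 ticks * 6 features).
    return nn.Sequential(nn.Linear(48, 64), nn.ReLU(), nn.Linear(64, 16))

# --- Offline (before any match): pre-train against logged battles from many bots ---
pretrained = make_embedding()
# ... offline training loop over logged data would go here ...
torch.save(pretrained.state_dict(), "pretrained_embedding.pt")  # hypothetical file name

# --- Online (during the match): start from the pre-trained weights and keep adapting ---
model = make_embedding()
model.load_state_dict(torch.load("pretrained_embedding.pt"))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)  # small LR: adapt without forgetting

def online_update(batch_x: torch.Tensor, batch_y: torch.Tensor) -> None:
    """One gradient step on waves collected against the current opponent."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(batch_x), batch_y)  # placeholder objective
    loss.backward()
    optimizer.step()
```

Meta-learning methods would change how the pre-training phase is structured, but the online side would still look roughly like the `online_update` step here.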

--Kev (talk) 19:52, 22 June 2021

I would say it probably depends on what you're targeting. When targeting a strong surfer, there's potentially a lot of value in maximizing the utility of data learned since the surfer last got information from collisions, and so that's a scenario where generalizing seems potentially more important in my eyes.

(Unless it's going 100% flattener, in which case I would say the value is in adapting on time scales that are simply different from what it's flattening over: either learning faster than the flattener, or learning long-term "history repeating itself" trends/patterns that the flattener loses sight of.)

Rednaxela (talk) 21:05, 22 June 2021
 

Also, a deep enough model could learn how surfers (without flattening) "surf" hits, much like networks such as the "Deep Interest Network" used in CTR prediction learn how users' interests change over time. However, our current use of KNN allows nothing like this. Maybe some end-to-end approach exists for the Robocode scenario.
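For illustration only, a DIN-style idea transplanted to targeting might look roughly like this: attention over a history of past waves, weighted by their relevance to the current firing situation (PyTorch-style sketch; feature dimensions, the attention form, and the GuessFactor-style output head are all assumptions, not anyone's actual bot):

```python
import torch
import torch.nn as nn

class WaveHistoryAttention(nn.Module):
    """DIN-style attention: weight past waves by their relevance to the current situation."""
    def __init__(self, feat_dim: int = 6, hidden: int = 32):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )
        self.head = nn.Linear(2 * feat_dim, 1)  # e.g. predict something GuessFactor-like

    def forward(self, current: torch.Tensor, history: torch.Tensor) -> torch.Tensor:
        # current: (batch, feat_dim); history: (batch, n_waves, feat_dim)
        cur = current.unsqueeze(1).expand_as(history)
        weights = torch.softmax(self.score(torch.cat([history, cur], dim=-1)), dim=1)
        pooled = (weights * history).sum(dim=1)  # relevance-weighted summary of past waves
        return self.head(torch.cat([current, pooled], dim=-1))

# Usage: condition the prediction on how the opponent reacted to past waves.
model = WaveHistoryAttention()
guess = model(torch.randn(1, 6), torch.randn(1, 20, 6))  # 20 past waves, placeholder features
```

Something like this is one way a model could pick up "how this surfer reacts to hits" patterns that a plain KNN over a fixed feature vector cannot represent.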

My past experiments with an end-to-end approach (a shallow NN doing online learning without pre-training) didn't yield anything interesting, though.

Xor (talk) 03:00, 23 June 2021
