3.1.3DC vs 3.1.3
Could someone explain why averaging the results from many random trees is stronger than using a single well-tuned tree?
I would suspect it might make your nearest neighbours come from multiple perspectives, giving you areas of concavity in your nearest-neighbour function instead of just a pure convex search area. I also suspect some fancy pre-processing of the tree attributes (perhaps dimension reduction/PCA) before adding points could give equivalent search patterns.
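In case it helps to make "averaging the results from many random trees" concrete, here's a rough Java sketch of pooling neighbours from several randomly weighted searches. This is not DrussGT's (or anyone's) actual code — the class names, weights and data are all made up, and the "trees" are just brute-force weighted KNN for readability:

```java
import java.util.*;

// Rough sketch only: the "trees" here are brute-force weighted KNN searches,
// and the random per-tree weights stand in for randomly chosen dimensions.
// Class names and numbers are made up; this is not DrussGT's code.
public class EnsembleKnnSketch {

    static class WeightedKnn {
        final double[] weights;                      // per-dimension weights for this "tree"
        final List<double[]> points = new ArrayList<>();
        final List<Double> guessFactors = new ArrayList<>();

        WeightedKnn(double[] weights) { this.weights = weights; }

        void add(double[] features, double guessFactor) {
            points.add(features);
            guessFactors.add(guessFactor);
        }

        // Guess factors of the k nearest stored points under this tree's weighted distance.
        List<Double> nearest(double[] query, int k) {
            List<Integer> order = new ArrayList<>();
            for (int i = 0; i < points.size(); i++) order.add(i);
            order.sort(Comparator.comparingDouble(i -> distanceSq(points.get(i), query)));
            List<Double> result = new ArrayList<>();
            for (int i = 0; i < Math.min(k, order.size()); i++) {
                result.add(guessFactors.get(order.get(i)));
            }
            return result;
        }

        private double distanceSq(double[] a, double[] b) {
            double sum = 0;
            for (int d = 0; d < a.length; d++) {
                double diff = (a[d] - b[d]) * weights[d];
                sum += diff * diff;
            }
            return sum;
        }
    }

    public static void main(String[] args) {
        int dims = 5, numTrees = 10, k = 5;
        Random rnd = new Random(42);

        // Each "tree" gets its own random dimension weights.
        List<WeightedKnn> ensemble = new ArrayList<>();
        for (int t = 0; t < numTrees; t++) {
            double[] w = new double[dims];
            for (int d = 0; d < dims; d++) w[d] = rnd.nextDouble();
            ensemble.add(new WeightedKnn(w));
        }

        // Log the same (fake) observations into every tree.
        for (int i = 0; i < 200; i++) {
            double[] f = new double[dims];
            for (int d = 0; d < dims; d++) f[d] = rnd.nextDouble();
            double gf = rnd.nextDouble() * 2 - 1;    // guess factor in [-1, 1]
            for (WeightedKnn tree : ensemble) tree.add(f, gf);
        }

        // At decision time, pool the neighbours from every tree; the combined
        // list is what would get turned into a danger profile / firing angle.
        double[] query = new double[dims];
        for (int d = 0; d < dims; d++) query[d] = rnd.nextDouble();
        List<Double> pooled = new ArrayList<>();
        for (WeightedKnn tree : ensemble) pooled.addAll(tree.nearest(query, k));
        System.out.println("Pooled " + pooled.size() + " guess factors from " + numTrees + " trees");
    }
}
```

The point is just that the decision is made from the pooled list, so no single weighting has to be exactly right.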
I'd answer this in 3 parts.
- There are some high level movement classes that are worth segmenting. Against simple targeters, time since velocity change is just noise. Against most bots, a flattener would be noise. But for a bot where a flattener helps, those lower levels of stats don't hurt. I think they even add "harmless noise" - they are still bullet dodging, so they won't make horrible decisions. So I have a few tiers (simple, normal / decaying, light flattener, flattener) in my movement stats, enabled at different enemy hit percentages.
- I found VCS to be easier to tune than DC. Similarly, I think layering a few trees is easier than trying to add features to your KNN system to create the exact "shapes" (or however you imagine them) that you want. "5 of last 150 + 5 of last 500 + 5 of last 1500" is easy to understand (see the sketch after this list). Adjusting the weights and distance function to produce the same results from one KNN call seems hard.
- I can't prove that it is.
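To illustrate the layering in the second point, here's a rough sketch of "5 of last 150 + 5 of last 500 + 5 of last 1500" as three KNN calls over different slices of the same log. Again, all names, numbers and data are made up for the example, not actual DrussGT code:

```java
import java.util.*;

// Rough illustration of "5 of last 150 + 5 of last 500 + 5 of last 1500":
// three KNN calls over different slices of the same log, with the results
// simply concatenated. Names, numbers and data are made up for the example.
public class LayeredKnnSketch {

    // Observations in the order they were logged: feature vector plus the guess factor that hit.
    static final List<double[]> features = new ArrayList<>();
    static final List<Double> guessFactors = new ArrayList<>();

    // k nearest neighbours, restricted to the most recent 'window' observations.
    static List<Double> nearestInWindow(double[] query, int k, int window) {
        int start = Math.max(0, features.size() - window);
        List<Integer> order = new ArrayList<>();
        for (int i = start; i < features.size(); i++) order.add(i);
        order.sort(Comparator.comparingDouble(i -> distanceSq(features.get(i), query)));
        List<Double> result = new ArrayList<>();
        for (int i = 0; i < Math.min(k, order.size()); i++) {
            result.add(guessFactors.get(order.get(i)));
        }
        return result;
    }

    static double distanceSq(double[] a, double[] b) {
        double sum = 0;
        for (int d = 0; d < a.length; d++) { double diff = a[d] - b[d]; sum += diff * diff; }
        return sum;
    }

    public static void main(String[] args) {
        Random rnd = new Random(1);
        for (int i = 0; i < 2000; i++) {
            features.add(new double[] { rnd.nextDouble(), rnd.nextDouble(), rnd.nextDouble() });
            guessFactors.add(rnd.nextDouble() * 2 - 1);
        }

        double[] query = { 0.5, 0.5, 0.5 };
        List<Double> layered = new ArrayList<>();
        // Recent data gets extra representation simply because it shows up in every layer.
        layered.addAll(nearestInWindow(query, 5, 150));
        layered.addAll(nearestInWindow(query, 5, 500));
        layered.addAll(nearestInWindow(query, 5, 1500));
        System.out.println("Layered neighbours: " + layered.size());
    }
}
```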
But then each tree should be specifically tuned against a specific kind of gun. Then each tree outputs a spike at a different GF, which shouldn't be a problem since you can dodge many GFs at once.
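For what it's worth, a tiny sketch of why spikes at different GFs combine cheaply: each predicted guess factor just adds a bump to one shared danger profile, and the movement picks the lowest-danger angle. The bin count and kernel width below are arbitrary, not taken from any real bot:

```java
// Tiny sketch of why spikes at different GFs combine cheaply: each predicted
// guess factor just adds a bump to one shared danger profile, and the movement
// goes to the lowest-danger angle. Bin count and kernel width are arbitrary.
public class DangerProfileSketch {

    public static void main(String[] args) {
        double[] predictedGfs = { -0.7, 0.1, 0.65 };   // e.g. one spike per tree/model
        int bins = 47;
        double[] danger = new double[bins];

        for (double gf : predictedGfs) {
            for (int b = 0; b < bins; b++) {
                double binGf = -1.0 + 2.0 * b / (bins - 1);
                double diff = binGf - gf;
                // Gaussian-ish bump centred on the predicted guess factor.
                danger[b] += Math.exp(-(diff * diff) / (2 * 0.1 * 0.1));
            }
        }

        // Pick the bin with the least accumulated danger.
        int best = 0;
        for (int b = 1; b < bins; b++) if (danger[b] < danger[best]) best = b;
        System.out.println("Safest GF ~ " + (-1.0 + 2.0 * best / (bins - 1)));
    }
}
```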
But generating dimensions at random to mimic DrussGT's 100 buffers is another matter entirely. A combination of dimensions which doesn't relate to any gun is supposed to hurt classification. Although I can't prove it either.
I'd tend to expect that when the "correct" parameters of the model (i.e. weightings of dimensions) have more uncertainty than the resulting prediction of any one model, the consensus among a diverse set of models is less likely to be completely wrong than any one model. Or to put it another way, perhaps there is no single well-tuned tree that fits all opponents of a large-ish category (i.e. "specific kind of gun") well enough to outperform a consensus of different models, and while there may exist well-tuned trees for smaller categories of opponents, the battles might not be long enough to reliably detect which would be the best category. That's all just conjecture of course though.
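As a toy illustration of the "less likely to be completely wrong" part — and this assumes independent errors, which real trees certainly don't have, with a made-up failure probability:

```java
// Toy numbers only, and it assumes independent errors (which real trees won't
// have): if a single model is badly wrong with probability p, all N of them
// being wrong at once happens with probability p^N.
public class ConsensusOddsSketch {
    public static void main(String[] args) {
        double p = 0.3;   // made-up chance that one model is badly wrong
        for (int n : new int[] { 1, 3, 10 }) {
            System.out.printf("N=%d  P(all wrong together) = %.6f%n", n, Math.pow(p, n));
        }
    }
}
```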