Thread history
I'd tend to expect that when the "correct" parameters of the model (i.e. the weightings of dimensions) have more uncertainty than the resulting prediction of any one model, the consensus among a diverse set of models is less likely to be completely wrong than any single model. To put it another way: perhaps there is no single well-tuned tree that fits all opponents of a large-ish category (i.e. "specific kind of gun") well enough to outperform a consensus of different models, and while well-tuned trees may exist for smaller categories of opponents, the battles might not be long enough to reliably detect which category would be the best fit. That's all just conjecture, of course.
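A toy sketch of the intuition above, with entirely made-up numbers: each "model" has a fixed mistuning of its parameters (the uncertain weightings) plus per-shot noise, and we compare the average error of one arbitrary model against the error of the consensus (mean) prediction. The biases, noise level, and target value here are hypothetical, chosen only to illustrate the effect.

```python
import random

random.seed(42)

TRUE_ANGLE = 0.3   # hypothetical "correct" firing offset
N_TRIALS = 10_000

# Hypothetical fixed mistunings for a diverse set of 7 models:
# each model's parameters are off by a different amount.
biases = [-0.18, 0.15, -0.05, 0.12, -0.10, 0.07, 0.02]
N_MODELS = len(biases)

single_err = 0.0      # accumulated |error| of one arbitrary model (the first)
consensus_err = 0.0   # accumulated |error| of the averaged prediction

for _ in range(N_TRIALS):
    # Each model's prediction: true value + its fixed bias + per-shot noise.
    preds = [TRUE_ANGLE + b + random.gauss(0, 0.1) for b in biases]
    single_err += abs(preds[0] - TRUE_ANGLE)
    consensus_err += abs(sum(preds) / N_MODELS - TRUE_ANGLE)

print(f"mean |error|, single model: {single_err / N_TRIALS:.3f}")
print(f"mean |error|, consensus:    {consensus_err / N_TRIALS:.3f}")
```

Because the individual mistunings partly cancel when averaged, the consensus error comes out well below the single model's error here, even though no individual tree was well tuned. Of course, this assumes the models' errors are reasonably independent, which is exactly what a diverse set of models is meant to provide.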