Fragment of a discussion from Talk:DrussGT/Version History

I'd tend to expect that when the "correct" parameters of the model (i.e. the weightings of the dimensions) have more uncertainty than the resulting prediction of any one model, the consensus among a diverse set of models is less likely to be completely wrong than any single model. Or to put it another way: perhaps there is no single well-tuned tree that fits all opponents of a large-ish category (i.e. "specific kind of gun") well enough to outperform a consensus of different models, and while well-tuned trees may exist for smaller categories of opponents, the battles might not be long enough to reliably detect which category would be best. That's all just conjecture, of course.
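
(A minimal sketch of what such a consensus might look like, in Java since Robocode bots are Java: several k-NN models over the same recorded situations, each with its own random dimension weightings, averaged into one prediction. Every name and parameter below is hypothetical, not DrussGT's actual code.)

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

// Hypothetical ensemble: MODELS independent k-NN estimators, each with its
// own random per-dimension weights, whose predictions are averaged.
public class RandomWeightEnsemble {
    static final int DIMENSIONS = 5; // e.g. lateral velocity, distance, time-since-decel...
    static final int MODELS = 16;    // size of the "diverse set of models"
    static final int K = 10;         // neighbours consulted per model

    final double[][] weights = new double[MODELS][DIMENSIONS];
    final List<double[]> points = new ArrayList<>(); // observed situations
    final List<Double> labels = new ArrayList<>();   // e.g. observed guess factors

    RandomWeightEnsemble(Random rng) {
        for (double[] w : weights)
            for (int d = 0; d < DIMENSIONS; d++)
                w[d] = rng.nextDouble(); // random weighting instead of one hand-tuned set
    }

    void record(double[] situation, double guessFactor) {
        points.add(situation);
        labels.add(guessFactor);
    }

    // Consensus prediction: mean of each model's k-NN estimate, so a single
    // badly-weighted model is outvoted rather than trusted outright.
    double predict(double[] query) {
        double sum = 0;
        for (double[] w : weights) sum += knnEstimate(query, w);
        return sum / MODELS;
    }

    double knnEstimate(double[] query, double[] w) {
        Integer[] idx = new Integer[points.size()];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        // Sort recorded situations by this model's weighted distance to the query.
        Arrays.sort(idx, Comparator.comparingDouble(i -> distance(points.get(i), query, w)));
        int k = Math.min(K, idx.length);
        double sum = 0;
        for (int i = 0; i < k; i++) sum += labels.get(idx[i]);
        return k == 0 ? 0 : sum / k;
    }

    static double distance(double[] a, double[] b, double[] w) {
        double d = 0;
        for (int i = 0; i < a.length; i++) {
            double diff = a[i] - b[i];
            d += w[i] * diff * diff; // weighted squared Euclidean distance
        }
        return d;
    }
}

(Linear scans here stand in for the kd-trees an actual bot would use; the point is only that errors from any one weighting tend to be averaged out by the rest.)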

Rednaxela (talk) 05:19, 17 January 2014

No proofs, only conjectures, but convincing enough.

I'll try using multiple classifications generated at random in my next version and see what happens.

MN (talk) 14:46, 17 January 2014