Fragment of a discussion from User talk:AW/KNN

Hmm, Mahalanobis distance looks interesting. I have considered using principal component analysis before, which I'd expect to give somewhat similar results (i.e. both normalize for covariance), but never got around to trying it.
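
To make the comparison concrete: the distance itself is cheap to evaluate once you have an inverse covariance matrix; building and inverting that matrix is the expensive part. A minimal Java sketch, assuming the inverse has already been computed somewhere (the class and method names are just illustrative, not from any existing bot):

```java
// Mahalanobis distance given a precomputed inverse covariance matrix.
// d(x, y) = sqrt((x - y)^T * S^-1 * (x - y))
public class MahalanobisDistance {

    private final double[][] invCovariance; // S^-1, computed elsewhere

    public MahalanobisDistance(double[][] invCovariance) {
        this.invCovariance = invCovariance;
    }

    public double distance(double[] x, double[] y) {
        int n = x.length;
        double[] diff = new double[n];
        for (int i = 0; i < n; i++) {
            diff[i] = x[i] - y[i];
        }
        double sum = 0;
        for (int i = 0; i < n; i++) {
            double rowDot = 0;
            for (int j = 0; j < n; j++) {
                rowDot += invCovariance[i][j] * diff[j];
            }
            sum += rowDot * diff[i];
        }
        return Math.sqrt(sum);
    }
}
```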

Regarding the performance issue of building a covariance matrix for the entire tree, I also think research into incremental methods of building/updating the covariance matrix may be key to making Mahalanobis distance practical for Robocode use.
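
One candidate would be the standard single-pass mean/co-moment update, which folds each new scan into the matrix in O(d^2) without revisiting the tree. This is only a sketch of that general technique, nothing Robocode-specific:

```java
// Incremental (online) covariance: the mean and co-moment matrix are
// updated one data point at a time, so the matrix never has to be
// rebuilt from the whole data set.
public class OnlineCovariance {

    private final int dims;
    private long count = 0;
    private final double[] mean;
    private final double[][] coMoment; // running sum of outer products of deviations

    public OnlineCovariance(int dims) {
        this.dims = dims;
        this.mean = new double[dims];
        this.coMoment = new double[dims][dims];
    }

    public void add(double[] x) {
        count++;
        double[] deltaOld = new double[dims];
        for (int i = 0; i < dims; i++) {
            deltaOld[i] = x[i] - mean[i];   // deviation from the old mean
            mean[i] += deltaOld[i] / count; // update the running mean
        }
        for (int i = 0; i < dims; i++) {
            double deltaNew = x[i] - mean[i]; // deviation from the new mean
            for (int j = 0; j < dims; j++) {
                coMoment[i][j] += deltaNew * deltaOld[j];
            }
        }
    }

    public double[][] covariance() {
        double[][] cov = new double[dims][dims];
        if (count < 2) {
            return cov; // not enough data yet
        }
        for (int i = 0; i < dims; i++) {
            for (int j = 0; j < dims; j++) {
                cov[i][j] = coMoment[i][j] / (count - 1);
            }
        }
        return cov;
    }
}
```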

I also wonder if it may make the most sense to build smaller covariance matrices from either the most recent X data points or certain regions of the data. They would be faster to compute, and may be able to take advantage of how the ideal weight of each axis can vary over the search space.
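
For the "most recent X points" variant, even a naive rolling window may be cheap enough when X is small, since each rebuild is only O(X*d^2). Again, purely a hypothetical sketch:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Covariance over a rolling window of the most recent points.
public class WindowedCovariance {

    private final int windowSize;
    private final int dims;
    private final Deque<double[]> window = new ArrayDeque<>();

    public WindowedCovariance(int windowSize, int dims) {
        this.windowSize = windowSize;
        this.dims = dims;
    }

    public void add(double[] point) {
        window.addLast(point);
        if (window.size() > windowSize) {
            window.removeFirst(); // drop the oldest point
        }
    }

    public double[][] covariance() {
        int n = window.size();
        double[][] cov = new double[dims][dims];
        if (n < 2) {
            return cov; // not enough data yet
        }
        double[] mean = new double[dims];
        for (double[] p : window) {
            for (int i = 0; i < dims; i++) {
                mean[i] += p[i] / n;
            }
        }
        for (double[] p : window) {
            for (int i = 0; i < dims; i++) {
                for (int j = 0; j < dims; j++) {
                    cov[i][j] += (p[i] - mean[i]) * (p[j] - mean[j]) / (n - 1);
                }
            }
        }
        return cov;
    }
}
```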

Edit: Oh, one other thought that comes to mind... perhaps it would make sense to precompute a covariance matrix for your targeting attributes from a large data set gathered in battles against many robots? It wouldn't be as nice as computing it on the fly for the opponent you're actually facing, but I'd expect it to have some positive impact anyway.

Rednaxela 14:16, 19 October 2012