Thread:Talk:Pris/Dodging Performance Anomaly?/reply (4)
Revision as of 11:08, 16 November 2013
I've looked at random forests before; another one that interested me was Extreme Learning Machines, which are feed-forward NNs working in an ensemble. The trouble I found was that even though these methods are fast compared to other machine learning techniques (k-means, feedback/recurrent NNs, SVMs), they are still much slower than a single KNN call in a Kd-Tree, simply because of the amount of data they need to touch for each 'event'. A Kd-Tree 'trains' by just inserting each point as it arrives, in O(log n) per point, and classifies in O(log n).
Feel free to prove me wrong though =) I'd love something which works well beyond the ubiquitous Kd-Tree!
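To make the cost comparison concrete, here is a minimal 2D Kd-Tree sketch (my own illustration, not code from this thread): per-point insertion and a nearest-neighbour query each descend roughly one root-to-leaf path, which is where the O(log n) figures above come from.

```java
import java.util.Arrays;

// Minimal 2D kd-tree: incremental insert plus nearest-neighbour search
// with branch pruning. Illustrative sketch only, not tournament code.
public class KdTree {
    static final int K = 2; // dimensionality

    static class Node {
        double[] point;
        Node left, right;
        Node(double[] p) { point = p; }
    }

    Node root;

    // Insert one point; the splitting axis cycles with depth (depth mod K).
    void insert(double[] p) { root = insert(root, p, 0); }

    private Node insert(Node n, double[] p, int depth) {
        if (n == null) return new Node(p);
        int axis = depth % K;
        if (p[axis] < n.point[axis]) n.left = insert(n.left, p, depth + 1);
        else                         n.right = insert(n.right, p, depth + 1);
        return n;
    }

    // Nearest neighbour by recursive descent with pruning.
    double[] nearest(double[] q) { return nearest(root, q, 0, null); }

    private double[] nearest(Node n, double[] q, int depth, double[] best) {
        if (n == null) return best;
        if (best == null || dist2(q, n.point) < dist2(q, best)) best = n.point;
        int axis = depth % K;
        double diff = q[axis] - n.point[axis];
        Node near = diff < 0 ? n.left : n.right;
        Node far  = diff < 0 ? n.right : n.left;
        best = nearest(near, q, depth + 1, best);
        // Only descend the far side if the splitting plane is closer
        // than the best match found so far.
        if (diff * diff < dist2(q, best)) best = nearest(far, q, depth + 1, best);
        return best;
    }

    private static double dist2(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < K; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return s;
    }

    public static void main(String[] args) {
        KdTree t = new KdTree();
        t.insert(new double[]{2, 3});
        t.insert(new double[]{5, 4});
        t.insert(new double[]{9, 6});
        t.insert(new double[]{4, 7});
        System.out.println(Arrays.toString(t.nearest(new double[]{5, 5}))); // [5.0, 4.0]
    }
}
```

Note the contrast with an ensemble: a forest or ELM touches every tree/neuron on each event, while the kd-tree query prunes whole subtrees and only visits O(log n) nodes on average (worst case can degrade on pathological or high-dimensional data).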
Another thing to consider is how you are going to pose the question. A lot of the successful NN-based approaches have used a bank of classifiers, one for each potential firing angle, and shot at the one with the highest probability. Others have tried posing it as a straight regression problem, but I don't think those worked as well, possibly because of the high noise (against top bots you are lucky to get a 10% hit rate).
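The "one classifier per firing angle" framing can be sketched like this (a hypothetical illustration using plain hit counts per guess-factor bin in place of real NN classifiers; the bin count and [-1, 1] guess-factor range are my assumptions):

```java
// One "classifier" per discretised firing angle: accumulate hit evidence
// per guess-factor bin and fire at the bin with the highest estimate.
// Illustrative stand-in for the NN-per-angle approach described above.
public class AngleBins {
    static final int BINS = 31;              // odd, so the middle bin is head-on
    final double[] hits = new double[BINS];  // per-angle hit evidence
    int samples = 0;

    // Record the guess factor (assumed in [-1, 1]) where the enemy was hit/seen.
    void observe(double guessFactor) {
        int bin = (int) Math.round((guessFactor + 1) / 2 * (BINS - 1));
        hits[Math.max(0, Math.min(BINS - 1, bin))]++;
        samples++;
    }

    // Pick the bin whose "classifier" reports the highest probability.
    int bestBin() {
        int best = BINS / 2;                 // default: head-on
        for (int i = 0; i < BINS; i++)
            if (hits[i] > hits[best]) best = i;
        return best;
    }

    // Map the chosen bin back to a guess factor in [-1, 1].
    double bestGuessFactor() { return 2.0 * bestBin() / (BINS - 1) - 1; }

    public static void main(String[] args) {
        AngleBins gun = new AngleBins();
        gun.observe(0.5); gun.observe(0.5); gun.observe(-0.2);
        System.out.println(gun.bestGuessFactor()); // picks the bin nearest 0.5
    }
}
```

A regression formulation would instead predict a single angle directly; with hit rates around 10% against top bots, the argmax-over-classifiers form is arguably more robust because each bin's estimate degrades gracefully under label noise.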
I'd be interested to hear what you end up trying, and how it works out.