Difference between revisions of "Thread:Talk:Pris/Dodging Performance Anomaly?/reply (4)"
Skilgannon (talk | contribs) m (Reply to Dodging Performance Anomaly?)
Skilgannon (talk | contribs) m (Add comment on Naive Bayes, fix add speed of Kd-Tree)
Latest revision as of 22:18, 16 November 2013
I've looked at random forests before; another one that interested me was Extreme Learning Machines, which are feed-forward NNs working in an ensemble. The trouble I found was that even though these methods are fast compared to other machine learning techniques (K-means, feedback NNs, SVMs), they are still much slower than a single KNN call in a Kd-Tree, simply because of the amount of data they need to touch for each 'event'. A Kd-Tree trains incrementally in O(log N) and classifies in O(log N), with N being the number of items in the tree. I think the only thing faster would be a Naive Bayes classifier.
Feel free to prove me wrong though =) I'd love something which works well beyond the ubiquitous Kd-Tree!
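To illustrate why Naive Bayes can be even cheaper per event than a Kd-Tree's O(log N) insert: an incremental Gaussian Naive Bayes only touches one running mean/variance per feature per event, independent of how much data has been seen. This is my own sketch (class and variable names are illustrative, not from the thread), using Welford's algorithm for the running statistics:

```python
import math

class IncrementalGaussianNB:
    """Toy incremental Gaussian Naive Bayes: each update is O(#features),
    regardless of how many events have already been recorded."""

    def __init__(self, n_features):
        self.n_features = n_features
        self.stats = {}  # per class: count, per-feature running mean and M2

    def update(self, x, label):
        c = self.stats.setdefault(label, {"n": 0,
                                          "mean": [0.0] * self.n_features,
                                          "m2": [0.0] * self.n_features})
        c["n"] += 1
        # Welford's online update for mean and sum of squared deviations
        for i, xi in enumerate(x):
            d = xi - c["mean"][i]
            c["mean"][i] += d / c["n"]
            c["m2"][i] += d * (xi - c["mean"][i])

    def predict(self, x):
        best, best_lp = None, -math.inf
        total = sum(c["n"] for c in self.stats.values())
        for label, c in self.stats.items():
            lp = math.log(c["n"] / total)  # log class prior
            for i, xi in enumerate(x):
                var = c["m2"][i] / c["n"] + 1e-9  # variance floor for stability
                lp += -0.5 * (math.log(2 * math.pi * var)
                              + (xi - c["mean"][i]) ** 2 / var)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

nb = IncrementalGaussianNB(2)
for x, y in [([0.1, 0.2], "hit"), ([0.2, 0.1], "hit"),
             ([0.9, 0.8], "miss"), ([0.8, 0.9], "miss")]:
    nb.update(x, y)
print(nb.predict([0.15, 0.15]))  # falls in the "hit" cluster
```

The trade-off, of course, is the naive independence assumption between features, which a KNN search does not make.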
Another thing to consider is how you are going to pose the question. A lot of the successful NN-based approaches have used a bunch of classifiers, one for each potential firing angle, and shot at the one with the highest probability. Others have tried posing it as a straight regression problem, but I don't think those worked as well, possibly because of the high noise (against top bots you are lucky to get a 10% hit rate).
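A minimal sketch of that one-classifier-per-angle framing (the bin count and Laplace smoothing here are my own illustrative choices, not from the thread): discretize the escape angle into bins, keep per-bin hit statistics, and aim at the bin with the highest estimated hit probability.

```python
# "One classifier per firing angle": each bin tracks its own hit rate,
# and we shoot at the bin whose smoothed hit probability is highest.
N_BINS = 31  # odd so the center bin corresponds to a head-on shot

hits = [0] * N_BINS
tries = [0] * N_BINS

def angle_to_bin(guess_factor):
    """Map a guess factor in [-1, 1] to a bin index."""
    gf = max(-1.0, min(1.0, guess_factor))
    return min(N_BINS - 1, int((gf + 1.0) / 2.0 * N_BINS))

def record(guess_factor, hit):
    """Record the outcome of one shot at the given angle."""
    b = angle_to_bin(guess_factor)
    tries[b] += 1
    if hit:
        hits[b] += 1

def best_bin():
    """Pick the bin with the highest Laplace-smoothed hit rate (+1/+2)."""
    return max(range(N_BINS),
               key=lambda b: (hits[b] + 1) / (tries[b] + 2))

# Simulated training: this enemy gets hit mostly near guess factor 0.5
for _ in range(50):
    record(0.5, True)
    record(-0.3, False)
print(best_bin() == angle_to_bin(0.5))  # True
```

Each bin is effectively a tiny independent binary classifier, which is what makes this framing robust to the noise of low hit rates.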
I'd be interested to hear what you end up trying, and how it works out.