3.1.3DC vs 3.1.3


Which one is better? Edit: found out that 3.1.3DC uses GoTo surfing.

    Tmservo (talk) 21:37, 24 December 2013

    Both 3.1.3 and 3.1.3DC use GoTo surfing. 3.1.3DC uses DC, while 3.1.3 uses some form of VCS. (Correct me if I am wrong)

      Straw (talk) 23:16, 24 December 2013
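
    A quick gloss for readers following along: on the wiki, DC (Dynamic Clustering) means logging raw observations and answering queries with a nearest-neighbour search, while VCS (Visit Count Stats) means discretizing attributes into segments and keeping visit counts per bin. A minimal sketch of the contrast, with illustrative attribute names and sizes not taken from either bot:

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;

        public class StatSystems {
            // VCS: segmented visit counts, one counter per (segment..., GF bin).
            static final int VEL_SEGS = 5, DIST_SEGS = 4, GF_BINS = 31;
            static final double[][][] vcs = new double[VEL_SEGS][DIST_SEGS][GF_BINS];

            static void vcsRecord(int velSeg, int distSeg, int gfBin) {
                vcs[velSeg][distSeg][gfBin]++;
            }

            // DC: log every observation, query the k nearest at decision time.
            record Point(double velocity, double distance, double guessFactor) {}
            static final List<Point> log = new ArrayList<>();

            static List<Point> dcNearest(double vel, double dist, int k) {
                // Linear scan for clarity; real bots use a kd-tree.
                List<Point> sorted = new ArrayList<>(log);
                sorted.sort(Comparator.comparingDouble(p ->
                        Math.pow(p.velocity() - vel, 2) + Math.pow(p.distance() - dist, 2)));
                return sorted.subList(0, Math.min(k, sorted.size()));
            }
        }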

      Correct :-) In the movement, to be more specific.

      And considering I write a changelog, I don't see how this question is anything other than laziness at its worst.

        Skilgannon (talk) 07:04, 25 December 2013

        I was always wondering why the best bot used VCS; DC seems much more elegant. Does it improve performance in your tests?

          Straw (talk) 08:05, 25 December 2013

          For some reason I've never managed to get the DC to perform as well as the VCS, so it still uses VCS. I remember Jdev commenting that a range search worked better for him than a KNN search in movement, so I'll be trying that next.

            Skilgannon (talk) 21:34, 25 December 2013
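
          The range-search alternative mentioned here, sketched for contrast (plain linear scans over a generic point log, not Jdev's actual code): KNN always returns a fixed k points however far away they are, while a range search returns only the points inside a fixed radius, however many that is.

              import java.util.ArrayList;
              import java.util.List;

              public class Queries {
                  static double dist2(double[] a, double[] b) {
                      double s = 0;
                      for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
                      return s;
                  }

                  // KNN: fixed count, variable radius.
                  static List<double[]> knn(List<double[]> log, double[] q, int k) {
                      List<double[]> sorted = new ArrayList<>(log);
                      sorted.sort((a, b) -> Double.compare(dist2(a, q), dist2(b, q)));
                      return sorted.subList(0, Math.min(k, sorted.size()));
                  }

                  // Range search: fixed radius, variable count. Distant points that
                  // KNN would be forced to return are simply excluded.
                  static List<double[]> rangeSearch(List<double[]> log, double[] q, double r) {
                      List<double[]> out = new ArrayList<>();
                      for (double[] p : log)
                          if (dist2(p, q) <= r * r) out.add(p);
                      return out;
                  }
              }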

            Have you tried doing something similar to your many randomized attribute buffers with kd-trees? You could make 100 trees, each with a random subset of the predictors, then combine the results. You could even start weighting some trees' results higher if they perform better.

              Straw (talk) 00:36, 14 January 2014
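
            A sketch of this suggestion under stated assumptions (the attribute count, subset size, and linear scans are illustrative; per-tree kd-trees would replace the scans in practice): each searcher sees only a random subset of the attribute dimensions, and the query results are pooled.

                import java.util.ArrayList;
                import java.util.Collections;
                import java.util.List;
                import java.util.Random;

                public class TreeEnsemble {
                    static final int NUM_TREES = 100, NUM_ATTRS = 8, SUBSET = 4;

                    final int[][] subsets = new int[NUM_TREES][SUBSET];
                    final double[] weights = new double[NUM_TREES]; // bump when a tree predicts well
                    final List<double[]> log = new ArrayList<>();

                    TreeEnsemble(Random rnd) {
                        for (int t = 0; t < NUM_TREES; t++) {
                            List<Integer> dims = new ArrayList<>();
                            for (int d = 0; d < NUM_ATTRS; d++) dims.add(d);
                            Collections.shuffle(dims, rnd);
                            for (int i = 0; i < SUBSET; i++) subsets[t][i] = dims.get(i);
                            weights[t] = 1.0;
                        }
                    }

                    // Distance measured only along one tree's attribute subset.
                    double dist2(int tree, double[] a, double[] b) {
                        double s = 0;
                        for (int d : subsets[tree]) s += (a[d] - b[d]) * (a[d] - b[d]);
                        return s;
                    }

                    // Pool each tree's k nearest neighbours. A fuller version would
                    // scale each tree's kernel-density contribution by weights[t].
                    List<double[]> query(double[] q, int k) {
                        List<double[]> pooled = new ArrayList<>();
                        for (int t = 0; t < NUM_TREES; t++) {
                            final int tt = t;
                            List<double[]> sorted = new ArrayList<>(log);
                            sorted.sort((a, b) -> Double.compare(dist2(tt, a, q), dist2(tt, b, q)));
                            pooled.addAll(sorted.subList(0, Math.min(k, sorted.size())));
                        }
                        return pooled;
                    }
                }

            Pooling neighbours found along different attribute subsets is what lets the combined search region be non-convex, the effect Skilgannon speculates about further down the thread.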

              In my mind, a tree is more heavyweight than a buffer. You need multiple buffers to begin to approximate the smoothing you get from KNN and kernel density. I use multiple trees in my surfing for the same reasons, but more on the order of 10 than 100.

                Voidious (talk) 01:03, 14 January 2014
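
              The smoothing from KNN and kernel density mentioned here, sketched: instead of reading a single visit-count bin, take the guess factors of the k nearest neighbours and place a kernel over each, then use the angle where the summed density peaks. The Gaussian kernel and fixed bandwidth below are illustrative choices, not any particular bot's settings.

                  public class KernelDensity {
                      static final double BANDWIDTH = 0.1; // illustrative

                      // Summed Gaussian kernel density at guess factor gf.
                      static double density(double gf, double[] neighbourGfs) {
                          double sum = 0;
                          for (double n : neighbourGfs) {
                              double u = (gf - n) / BANDWIDTH;
                              sum += Math.exp(-0.5 * u * u);
                          }
                          return sum;
                      }

                      // Scan guess factors in [-1, 1] for the density peak; a gun
                      // fires there, a surfer treats it as the most dangerous angle.
                      static double peakGuessFactor(double[] neighbourGfs) {
                          double bestGf = 0, best = -1;
                          for (double gf = -1.0; gf <= 1.0; gf += 0.01) {
                              double d = density(gf, neighbourGfs);
                              if (d > best) { best = d; bestGf = gf; }
                          }
                          return bestGf;
                      }
                  }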

                I was running some tests over the holidays, and it seems I've improved my DC to the point where against weaker bots it is as good as VCS. However, against top bots VCS movement is still much stronger, ~15% difference on the MC2K7.

                  Skilgannon (talk) 07:08, 14 January 2014

                  How did you improve it?

                    Tmservo (talk) 13:19, 14 January 2014

                    Adjusting rolling speed and changing the shape functions of some attributes.

                      Skilgannon (talk) 20:17, 14 January 2014

                      What are rolling speed and shape functions?

                        Tmservo (talk) 23:51, 14 January 2014

                        Rolling speed is part of his moving-average algorithm: basically, how quickly new values supersede older ones over time.

                        By shape functions, I assume he means the shape of the kernel function he uses.

                          Chase 08:18, 15 January 2014
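
                        A sketch of a rolling average with such a speed knob, using one common formulation from the wiki (the depth value passed in is illustrative): the higher the depth, the more slowly old data is superseded.

                            public class RollingAverage {
                                private final double depth; // the "rolling speed" knob
                                private double average = 0;
                                private long samples = 0;

                                RollingAverage(double depth) { this.depth = depth; }

                                void update(double value) {
                                    // Weight the old average by at most `depth` samples,
                                    // so new values keep superseding older ones forever.
                                    double n = Math.min(samples, (long) depth);
                                    average = (average * n + value) / (n + 1);
                                    samples++;
                                }

                                double get() { return average; }
                            }

                        With depth 1, each new value immediately dominates; with depth in the hundreds, the average rolls very slowly.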

                          Close. The shape functions are the nonlinear scalings I do on attributes before adding them to the tree.

                            Skilgannon (talk) 08:43, 15 January 2014
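
                          A sketch of what such scalings might look like (these particular curves are hypothetical, not DrussGT's actual shape functions): stretch the part of an attribute's range where small differences matter most, compress the rest, and only then hand the value to the tree's distance metric.

                              public class ShapeFunctions {
                                  // Compress large distances: 100 vs 200 should differ
                                  // more in the tree than 800 vs 900.
                                  static double shapeDistance(double distance) {
                                      return Math.log(1 + distance / 100.0);
                                  }

                                  // Emphasise small time-since-deceleration values,
                                  // where recent events carry the most information.
                                  static double shapeTimeSince(double ticks) {
                                      return 1 - Math.exp(-ticks / 20.0);
                                  }

                                  // Build the point actually inserted into the kd-tree.
                                  static double[] toTreePoint(double rawDist, double rawTicks) {
                                      return new double[] {
                                          shapeDistance(rawDist), shapeTimeSince(rawTicks)
                                      };
                                  }
                              }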

                            Oooh, that makes more sense. :)

                              Chase 17:23, 15 January 2014

                              Shape functions are part of the distance function, then.

                                MN (talk) 13:09, 16 January 2014

                                Could someone explain why averaging the results from many random trees is stronger than using a single well-tuned tree?

                                  MN (talk) 13:13, 16 January 2014

                                  I would suspect it might make your nearest neighbours come from multiple perspectives, giving you areas of concavity in your nearest-neighbour function instead of just a pure convex search area. I also suspect using some fancy pre-processing on tree attributes (perhaps dimension reduction/PCA) before adding them could give equivalent search patterns.

                                    Skilgannon (talk) 13:56, 16 January 2014

                                    I'd answer this in 3 parts.

                                    1. There are some high level movement classes that are worth segmenting. Against simple targeters, time since velocity change is just noise. Against most bots, a flattener would be noise. But for a bot where a flattener helps, those lower levels of stats don't hurt. I think they even add "harmless noise" - they are still bullet dodging, so they won't make horrible decisions. So I have a few tiers (simple, normal / decaying, light flattener, flattener) in my movement stats, enabled at different enemy hit percentages.
                                     2. I found VCS to be easier to tune than DC. Similarly, I think layering a few trees is easier than trying to add features to your KNN system to create the exact "shapes" (or however you imagine it) that you want. "5 of last 150 + 5 of last 500 + 5 of last 1500" is easy to understand (see the sketch after this comment). Adjusting the weights and distance function to produce the same results from one KNN call seems hard.
                                     3. I can't prove that it is.
                                       Voidious (talk) 16:50, 16 January 2014
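
                                     A sketch of the layering from point 2, assuming a data log kept in insertion order, with linear scans standing in for per-window trees: take 5 nearest neighbours from each of several recency windows and pool them.

                                         import java.util.ArrayList;
                                         import java.util.List;

                                         public class LayeredKnn {
                                             record Entry(double[] attrs, double gf) {}

                                             // Oldest entries first; new scans are appended.
                                             private final List<Entry> log = new ArrayList<>();

                                             void add(Entry e) { log.add(e); }

                                             static double dist2(double[] a, double[] b) {
                                                 double s = 0;
                                                 for (int i = 0; i < a.length; i++)
                                                     s += (a[i] - b[i]) * (a[i] - b[i]);
                                                 return s;
                                             }

                                             // k nearest among only the latest `window` entries.
                                             List<Entry> knnInWindow(double[] q, int k, int window) {
                                                 int from = Math.max(0, log.size() - window);
                                                 List<Entry> recent =
                                                         new ArrayList<>(log.subList(from, log.size()));
                                                 recent.sort((a, b) -> Double.compare(
                                                         dist2(a.attrs(), q), dist2(b.attrs(), q)));
                                                 return recent.subList(0, Math.min(k, recent.size()));
                                             }

                                             // "5 of last 150 + 5 of last 500 + 5 of last 1500".
                                             List<Entry> layeredQuery(double[] q) {
                                                 List<Entry> pooled = new ArrayList<>();
                                                 for (int w : new int[] { 150, 500, 1500 })
                                                     pooled.addAll(knnInWindow(q, 5, w));
                                                 return pooled; // recent points can appear in several layers
                                             }
                                         }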

                                      DC is more versatile than VCS. There is more room for improvement, but also more room to screw up. And WaveSim doesn't work with movement.

                                      It is harder to unlock all the potential from DC in movement.

                                         MN (talk) 01:20, 14 January 2014