Reproducing the Results

Revision as of 10 August 2018 at 17:38.


WhiteFang 2.2.7.1 is the best version of WhiteFang, but it had some issues. I have been trying to reproduce its results for a long time and, as you can understand, I couldn't match its performance.
Problem 1
I had a problem with my KNNPredictor class. As K increased, or as the sum of the attribute weights increased, it would return larger numbers, which caused my Simple predictor to have three times the effect of the Standard predictor (normally it should have half the Standard predictor's effect).
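One common way to avoid this kind of scale drift, sketched below with assumed names (this is illustrative, not WhiteFang's actual code), is to divide the accumulated danger by both the neighbour count K and the total attribute weight, so that increasing either does not inflate the predictor's output:

```java
// Illustrative sketch: normalizing a KNN danger estimate so that the
// result stays on the same scale regardless of K or the weight sum.
// Class and method names are assumptions, not WhiteFang's real code.
final class KnnDanger {
    static double normalizedDanger(double[] neighbourDangers, double totalAttributeWeight) {
        double sum = 0.0;
        for (double d : neighbourDangers) {
            sum += d;
        }
        // Dividing by K (the neighbour count) and by the summed attribute
        // weights keeps predictors with different settings comparable.
        return sum / (neighbourDangers.length * totalAttributeWeight);
    }
}
```

With this normalization, doubling K while keeping the per-neighbour dangers the same leaves the returned value unchanged.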
Problem 2
The flattener would log real waves twice, which decreased the number of data points I could find and weighted real waves twice as much.
Additional note: when I fixed the flattener problem, my score decreased.
I don't know how to solve it, since the Simple formula, Standard formula, and Flattener formula have different attributes, and the Standard and Flattener formulas have attributes of the form 1 / (x * k + 1). Any solutions?
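The double-logging half of Problem 2 can be guarded against by remembering which real waves have already been logged. A minimal sketch (names assumed, and wave identity here is plain object identity; the real fix depends on how waves are represented):

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative guard against a flattener logging the same real wave twice.
// Not WhiteFang's actual code.
final class FlattenerLog {
    private final Set<Object> loggedWaves = new HashSet<>();

    // Returns true only the first time a given wave is seen, so each
    // real wave contributes exactly one data point.
    boolean logOnce(Object wave) {
        return loggedWaves.add(wave);
    }
}
```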
    Dsekercioglu (talk)14:19, 7 August 2018

    Welcome to the land of Performance Enhancing Bugs! When facing things like this, there are two ways to go: leave it, or fully understand how it works and reproduce it!

    It seems that you have mostly understood how the bug works, so just tweak your formula to fit the bug in!

    Btw, have you ever tried putting in two exact copies of the same bot under different version numbers and looking at the score difference?

      Xor (talk)14:33, 7 August 2018
      I will just revert to 2.2.7.1 with the XOR filter, which doesn't log real waves twice, and I will use my better firepower formula to get a better score.
      I don't think putting in exactly the same bot will help, because no bots have changed since WhiteFang 2.2.7.1, but you may be right; my Bullet Shielding algorithm may cause extreme deviations in score.
        Dsekercioglu (talk)16:32, 8 August 2018

        The only purpose of putting the same bot in twice, in the same environment, is to see how much noise there is in your testing method. Experience from improving ScalarBot told me that, most of the time, a small change in score is just noise; a big drop is a bug, and a big increase is a new critical feature.
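That rule of thumb can be made quantitative: run the identical pairing several times, then treat any single-version score change smaller than a couple of standard deviations of the repeats as indistinguishable from noise. A minimal sketch, with made-up score values:

```java
import java.util.Arrays;

// Illustrative noise check: statistics over repeated runs of the very
// same bot pairing. Thresholds and names are assumptions.
final class NoiseCheck {
    static double mean(double[] scores) {
        return Arrays.stream(scores).average().orElse(0.0);
    }

    static double stddev(double[] scores) {
        double m = mean(scores);
        double var = Arrays.stream(scores)
                .map(s -> (s - m) * (s - m))
                .average().orElse(0.0);
        return Math.sqrt(var);
    }

    // A score change is "probably real" only if it exceeds the noise band
    // measured from the repeated identical runs.
    static boolean probablyReal(double change, double[] repeatScores) {
        return Math.abs(change) > 2.0 * stddev(repeatScores);
    }
}
```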

          Xor (talk)15:42, 10 August 2018
          I will put 2.2.7.1N after 2.4.5; maybe what I have been trying to beat for a long time is just a lot of noise =)
            Dsekercioglu (talk)18:38, 10 August 2018
            I have just been wondering how you normalize dangers in ScalarBot, or, since it's not open source, what the general way of doing it is. In the latest version of WhiteFang I use

            weight * MaximumPossibleDistance / (euclidean_distance + 1) / predictionNum

            to balance different weighting schemes and values of K.
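Written out as a plain Java function (parameter names are mine, and MaximumPossibleDistance is assumed to mean the largest Euclidean distance attainable in the attribute space), that normalization is:

```java
// The danger normalization quoted above, written out as a function.
// Parameter names are illustrative, not WhiteFang's actual identifiers.
final class DangerNorm {
    static double normalize(double weight, double maxPossibleDistance,
                            double euclideanDistance, int predictionNum) {
        // Closer neighbours (small distance) contribute more danger, and
        // dividing by predictionNum keeps different K values on one scale.
        return weight * maxPossibleDistance / (euclideanDistance + 1.0) / predictionNum;
    }
}
```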

              Dsekercioglu (talk)14:23, 9 August 2018

              Just have a look at DrussGT and Diamond; ScalarBot uses a similar formula, and most (new) bots imitate this style as well.

                Xor (talk)16:13, 10 August 2018

                Btw, in my opinion, Performance Enhancing Bugs are not bugs. They either accidentally fix another bug or fix a bug in your logic. There must be a reason behind the score difference, so just respect the result.

                  Xor (talk)14:38, 7 August 2018