Talk:WhiteFang

From Robowiki


Anti-Surfer Targeting

I have been tuning WhiteFang's gun lately, and while I could get a lot of success in TCRM with genetic algorithms, my Anti-Surfer targeting isn't improving at all.
Thinking logically, finding weights with a genetic algorithm shouldn't be as effective against learning movement, since surfers change how they move, but I had thought it would still do better than some randomly assigned weights.
Currently the best score I have got is 69.36, with the current AS targeting being just the main gun with 10 times the decay and a lower K.
Any advice on how to tune it?
Dsekercioglu (talk)11:47, 19 March 2019

I found recorded data, like WaveSim, still quite useful when it comes to hitting adaptive movement.

The reason may be counter-intuitive, but as long as their movement & your gun can be considered random, it's still logical.

And although intuitively wave surfing is not random, in fact it is. The reason is simple: non-linearity & self-feedback.

Surfers are only non-random given specific information, and you certainly can't have that for every opponent.

And about genetic tuning, I think it is useful, but only after you get the basic stuff right, e.g. precise intersection, using only firing waves, etc.

Xor (talk)07:29, 20 March 2019
I have a simple precise prediction, but I see these kinds of things as more of a guaranteed improvement than a necessity for tuning.
What do you mean by using only firing waves?=)
I might have to move to a -1 to 1 GF system rather than the current system with 51 bins.
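For reference, moving between a 51-bin array and the continuous [-1, 1] guess-factor range is just a linear rescale around the middle bin. A minimal sketch of the mapping (class and method names are illustrative, not WhiteFang's):

```java
// Hypothetical sketch of converting between a 51-bin danger array and
// the continuous guess factor range [-1, 1]. Bin 25 corresponds to GF 0.
class GuessFactorBins {
    static final int BINS = 51;
    static final int MIDDLE = BINS / 2; // 25

    // GF in [-1, 1] -> nearest bin index in [0, 50]
    static int gfToBin(double gf) {
        return (int) Math.round(MIDDLE * (gf + 1.0));
    }

    // bin index -> GF at that bin's center
    static double binToGf(int bin) {
        return (bin - MIDDLE) / (double) MIDDLE;
    }
}
```

With a continuous GF system the bin quantization disappears, but the same rescale is still useful for visualizing or migrating old data.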
The problem I had was that after tuning against 3 micro matches with Raiko I had a 0.9% improvement in TCRM, but after 3 matches against 9 different surfers I had a 5% score loss.
Are there any attributes that help a lot against surfers?
Dsekercioglu (talk)10:30, 20 March 2019

Imo the attributes and weights for random movement and surfers may be completely different, so I just tune them completely separately.

And there is no single magic attribute that helps a lot against random movement, and the same goes for surfers.

And don't expect good performance when you use virtual waves against surfers, since those waves are irrelevant to them.

Xor (talk)12:50, 20 March 2019
 

Firing waves means only waves where there was a real bullet. Against non-adaptive movement the more waves the better, but adaptive opponents dodge only your actual bullets, so the other waves will give bad information.
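The distinction above can be sketched as a simple filter on which broken waves get logged as training data. This is an illustrative sketch, not WhiteFang's or DrussGT's actual code:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of "firing waves only": every tick can spawn a wave, but against
// adaptive movement only waves that carried a real bullet are logged,
// because surfers only react to real bullets.
class WaveLog {
    static class Wave {
        final boolean hasRealBullet;
        Wave(boolean hasRealBullet) { this.hasRealBullet = hasRealBullet; }
    }

    final List<Wave> trainingData = new ArrayList<>();
    boolean enemyIsAdaptive;

    void onWaveBroken(Wave w) {
        // Virtual waves still add useful data against non-adaptive movement.
        if (w.hasRealBullet || !enemyIsAdaptive) {
            trainingData.add(w);
        }
    }
}
```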

For me, what did well against adaptive movements was recording data, doing maybe 10 generations of genetic tuning, then re-recording the data.

Make sure to add the adaptive speed to your genetic parameters. You might also want to use parameters people don't surf with; I did some odd things in DrussGT.
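One way to read "adaptive speed" as a genetic parameter is a decay gene that down-weights older data points, letting the GA decide how quickly the gun forgets. A hypothetical sketch under that assumption:

```java
// Sketch: exponential down-weighting of a logged data point by its age
// (e.g. in waves or rounds). decayRate would be one gene in the GA's
// chromosome; decayRate = 0 means "never forget".
class DecayWeight {
    static double dataWeight(int age, double decayRate) {
        return Math.exp(-decayRate * age);
    }
}
```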

But really, the secret to a good score is good movement.

Skilgannon (talk)14:30, 20 March 2019
After Xor said precise intersection, I was searching for another meaning in real waves.=)
My fitness function uses the KNNPredictor class in WhiteFang, so basically everything is included in the algorithm.
When I actually succeed at making Robocode allow more data saving, I'll move on to the recursive technique.
"But really, the secret to a good score is good movement." I know, but I have been working on movement since 2.2.7.1 and I want to stop my suffering for a while. Maybe a genetic algorithm against Simple Targeting strategies and for the flattener?
Edit:
After tuning with three more parameters, three things happened:
  • My AS gun outperformed my Main Gun against Shadow for the first time
  • I found out that my GA always maximizes K, minimizes Divisor (probably I forgot to activate bot width calculations) and minimizes shots taken
  • Manhattan distance works much better than Squared Euclidean
The random weights started out with 1542 hits.
GA got it to 1923 hits.
I made K 100, Divisor 1 and Decay 0, and hits rose to 2086.
Using Manhattan distance got 2117 hits.
Finally, when I rolled really high and low values to 10 and 0, it got 2120 hits.
Dsekercioglu (talk)15:03, 20 March 2019

I use a patched version of Robocode that allows unlimited data saving, only from my data recording bot. Anyway, a normal Robocode with debug mode on is sufficient to do this; you just have to hope the robots in your test bed are free from file-writing bugs.

Have you ever tried using k = sqrt(tree size)? This is a common practice when it comes to KNN.
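The heuristic Xor mentions is a one-liner; clamping to at least 1 keeps it sane for a nearly empty tree. A minimal sketch:

```java
// k = sqrt(tree size), a common KNN rule of thumb: k grows with the data
// so estimates get smoother as more points are logged, but never below 1.
class KnnK {
    static int chooseK(int treeSize) {
        return Math.max(1, (int) Math.sqrt(treeSize));
    }
}
```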

Xor (talk)09:23, 21 March 2019
Thanks to Xor and Skilgannon for their help.
I have collected data from 1005 battles. My GA finally gives some sensible results: low K, reverse wall weighted less, a high weight for acceleration, etc.
Hopefully I'll get a score higher than 72.0 in TCAS this time.
Dsekercioglu (talk)12:01, 25 March 2019
 

Reproducing the Results

WhiteFang 2.2.7.1 is the best version of WhiteFang, but it had some issues, and although I have been trying to reproduce its results for a long time, as you can understand, I couldn't match its performance.
Problem 1
I had a problem with my KNNPredictor class. As K increased, or as the sum of the attributes' weights increased, it would return bigger numbers, which caused my Simple predictor to have three times more effect than the Standard predictor (normally it should have half of its effect).
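One common way to avoid that scale problem is to average the kernel values over the k neighbours instead of summing them, so the estimate stays on a fixed scale no matter how large k or the weight sum grows. A minimal sketch of the idea (not WhiteFang's actual code):

```java
// Averaging keeps the danger estimate in (0, 1] regardless of k, so two
// predictors with different k (or weight sums) remain directly comparable.
class KnnNormalize {
    static double normalizedDanger(double[] neighborDistances) {
        double sum = 0;
        for (double d : neighborDistances) {
            sum += 1.0 / (1.0 + d); // kernel value in (0, 1]
        }
        return sum / neighborDistances.length;
    }
}
```

With summing, doubling k roughly doubles the output; with averaging, it only reduces noise.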
Problem 2
The flattener would log real waves twice, which would decrease the number of data points I could find and weight real waves twice as much.
Additional Note: When I fixed the flattener problem, my score decreased.
I don't know how to solve it, since the Simple Formula, Standard Formula and Flattener Formula have different attributes, and the Standard Formula and Flattener Formula have 1 / (x * k + 1) type attributes. Any solutions?
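For what it's worth, the 1 / (x * k + 1) transform mentioned above maps any non-negative attribute into (0, 1], with k controlling how fast it falls off, which is part of why formulas that use it and formulas that don't end up on different scales. A sketch:

```java
// Maps x in [0, inf) into (0, 1]: x = 0 gives exactly 1, and larger k
// makes the falloff toward 0 steeper. The name is illustrative.
class AttributeTransform {
    static double transform(double x, double k) {
        return 1.0 / (x * k + 1.0);
    }
}
```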
Dsekercioglu (talk)15:19, 7 August 2018

Welcome to the land of Performance Enhancing Bug! When facing things like this, there are two ways to go — leave it, or fully understand how it works and reproduce it!

It seems that you have mostly understood how the bug works, then just tweak your formula to fit the bug in!

Btw, have you ever tried putting two exact copies of the same bot in as different versions, to see the score difference?

Xor (talk)15:33, 7 August 2018
I will just revert back to 2.2.7.1 with the XOR filter, which doesn't log real waves twice, and I will use my better firepower formula to get a better score.
I don't think putting in exactly the same bot will help, because no bots have changed since WhiteFang 2.2.7.1, but you may be right; my Bullet Shielding algorithm may cause extreme deviations in score.
Dsekercioglu (talk)17:32, 8 August 2018

The only purpose of putting the same bot in twice in the same environment is to see how much noise there is in your testing method. Experience from improving ScalarBot told me that most of the time a small change in score is just noise; a big drop is a bug and a big increase is a new critical feature.

Xor (talk)16:42, 10 August 2018
I will put 2.2.7.1N in after 2.4.5; maybe it is just a great deal of noise that I had been trying to beat for a long time =)
Dsekercioglu (talk)19:38, 10 August 2018
 
Oh, I am so dumb. It has nothing to do with my movement; 2.2.7.1 contains this line of code:
                counter++;
                if (counter == 0) {
                    possibleFuturePositions[direction + 1] = (Point2D.Double) predictedPosition.clone();
                }
Consequently it is impossible for the future position ever to be predicted; but my prediction system doesn't work correctly anyway, so that is what made my scores lower =)
Anyways, thank you so much for your time and I have actually learnt a lot of things.
Dsekercioglu (talk)22:21, 10 August 2018
 

This is so annoying: the bug I mentioned above is actually more sensible than "sometimes predicting the future position correctly and sometimes calculating the opposite direction"; however, it seems this one was also a Performance Enhancing Bug.

Dsekercioglu (talk)11:01, 11 August 2018
 
 
I have just been wondering how you normalize dangers in ScalarBot, or, since it's not open source, what the general way of doing it is. In the latest version of WhiteFang I use
weight * MaximumPossibleDistance / (euclidean_distance + 1) / predictionNum

to have a balance between different weighting schemes and K's.
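As a sanity check, the quoted formula transcribed directly into code (a sketch; the parameter names are mine, not WhiteFang's):

```java
// weight * MaximumPossibleDistance / (euclidean_distance + 1) / predictionNum,
// exactly as quoted above: closer neighbours score higher, the max-distance
// factor fixes the scale, and dividing by predictionNum averages over waves.
class DangerFormula {
    static double danger(double weight, double maxPossibleDistance,
                         double euclideanDistance, int predictionNum) {
        return weight * maxPossibleDistance / (euclideanDistance + 1.0) / predictionNum;
    }
}
```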

Dsekercioglu (talk)15:23, 9 August 2018

Just have a look at DrussGT and Diamond; ScalarBot uses a similar formula, and most (new) bots imitate this style as well.

Xor (talk)17:13, 10 August 2018

Make sure to use bullet damage and time-till-hit for weighting one wave vs another. Depending on the type of enemy this can make a big difference.
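Robocode's bullet damage for power p is 4p + max(0, 2(p − 1)); dividing by time-till-hit is one simple way to combine the two factors Skilgannon names, so nearer, heavier bullets dominate the danger. The combination shown here is an illustrative assumption, not a specific bot's formula:

```java
class WaveWeighting {
    // Robocode's standard bullet damage formula for a given firepower.
    static double bulletDamage(double power) {
        return 4 * power + Math.max(0, 2 * (power - 1));
    }

    // Illustrative wave weight: heavier bullets and sooner impacts count more.
    static double waveWeight(double power, double timeTillHit) {
        return bulletDamage(power) / timeTillHit;
    }
}
```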

Skilgannon (talk)20:12, 10 August 2018
Thank you for your response. The weighting system I wrote above only affects the bins. Later I divide/multiply those values while choosing the best movement option, since WhiteFang has a bin-based KNN algorithm (the code base was originally designed for neural networks).
Actually, I have a logical mistake in my movement, but I haven't fixed it yet since I wanted a controlled testing environment.
Dsekercioglu (talk)23:16, 10 August 2018

Btw, in my opinion, Performance Enhancing Bugs are not bugs. They either occasionally fix another bug or fix a bug in your logic. There must be a reason behind the score difference, so just respect the result.

Xor (talk)15:38, 7 August 2018
 