Talk:WhiteFang


Contents

Thread title | Replies | Last modified
Anti-Surfer Targeting | 18 | 11:01, 25 March 2019
Reproducing the Results | 11 | 10:01, 11 August 2018
Test Bed | 2 | 11:12, 10 July 2018
Possible errors | 7 | 22:31, 29 December 2017
Micro Ant | 1 | 14:49, 23 December 2017

Anti-Surfer Targeting

I have been tuning WhiteFang's gun lately, and while I could get a lot of success in TCRM with genetic algorithms, my AS targeting isn't improving a bit.
Thinking logically, finding weights with a genetic algorithm shouldn't be as effective against learning movement, since they change how they move, but I had thought it would do better than some randomly assigned weights.
Currently the best score I have got is 69.36, with the current AS targeting being just the main gun with 10 times the decay and a lower K.
Any advice about how to tune it?
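
For reference, a minimal sketch of the kind of decay weighting that "10 times the decay" describes, with newer data points counting more; DataPoint, the field names and the scoring formula are stand-ins, not WhiteFang's actual code.

import java.util.List;

// Hypothetical sketch only: DataPoint and the field names are stand-ins.
class DecayedKnnScore {

    static class DataPoint {
        double guessFactor; // GF recorded when the wave passed
        long recordedAt;    // time (or scan index) when the point was logged
    }

    // Score a candidate guess factor from its nearest neighbours, weighting newer
    // data more heavily; an anti-surfer gun can simply use a much larger decay.
    static double score(double candidateGf, List<DataPoint> neighbours,
                        double decay, long now) {
        double total = 0;
        for (DataPoint p : neighbours) {
            double age = now - p.recordedAt;
            double recency = 1.0 / (1.0 + decay * age);   // old points fade out faster with higher decay
            double closeness = 1.0 / (1.0 + Math.abs(candidateGf - p.guessFactor));
            total += recency * closeness;
        }
        return total;
    }
}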
Dsekercioglu (talk)10:47, 19 March 2019

I have found recorded data, like WaveSim, still quite useful when it comes to hitting adaptive movement.

The reason may be counter-intuitive, but as long as their movement & your gun can be considered random, it still holds.

And although intuitively wave surfing is not random, in fact it is. The reason is simple: non-linearity & self-feedback.

Surfers are only non-random given specific information, and you certainly can't have that for everyone.

And about genetic tuning, I think it is useful, but only after you get the basic stuff right, e.g. precise intersection, using only firing waves, etc.
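
For reference, a minimal sketch of what tuning against recorded data can look like, in the WaveSim spirit: replay logged firing waves and score a candidate weight set by how many it would have hit. RecordedWave and predictGf are stand-ins, not the real WaveSim or WhiteFang API.

import java.util.List;

// Hypothetical offline fitness function: replay recorded firing waves and count how
// many a gun built from the candidate weights would have hit.
class OfflineFitness {

    static class RecordedWave {
        double[] situation;  // attribute values when the wave was fired
        double visitedGf;    // GF the enemy actually occupied when the wave broke
        double botWidthGf;   // enemy half-width in GF terms (precise intersection)
    }

    static int countHits(double[] candidateWeights, List<RecordedWave> waves) {
        int hits = 0;
        for (RecordedWave w : waves) {
            double aimedGf = predictGf(candidateWeights, w.situation);
            if (Math.abs(aimedGf - w.visitedGf) <= w.botWidthGf) {
                hits++; // this shot would have connected
            }
        }
        return hits;
    }

    static double predictGf(double[] weights, double[] situation) {
        // run the KNN gun with the candidate weights here (omitted in this sketch)
        return 0;
    }
}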

Xor (talk)06:29, 20 March 2019
I have simple precise prediction, but I see these kinds of things as more of a guaranteed improvement than a necessity for tuning.
What do you mean by using only firing waves? =)
I might have to move to a -1 to 1 GF system rather than the current system with 51 bins.
The problem I had was that after tuning against 3 Raiko micro matches I had a 0.9% improvement in TCRM, but after 3 matches against 9 different surfers I had a 5% score loss.
Any attributes that help a lot against surfers?
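
For reference, a minimal sketch of the usual mapping between a 51-bin array and the -1 to 1 GF range, assuming the middle bin (index 25) represents GF 0:

class GfBins {
    static final int BINS = 51;
    static final int MIDDLE = (BINS - 1) / 2;

    static double binToGf(int bin) {
        return (bin - MIDDLE) / (double) MIDDLE;      // maps 0..50 onto -1..1
    }

    static int gfToBin(double gf) {
        int bin = (int) Math.round(gf * MIDDLE + MIDDLE);
        return Math.max(0, Math.min(BINS - 1, bin));  // clamp to a valid bin
    }
}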
Dsekercioglu (talk)09:30, 20 March 2019

Imo the attributes and weights for random movement and surfers may be completely different, so I just tune them completely separately.

And there is no single magic attribute that helps a lot against random movement, and the same goes for surfers.

And you can't expect good performance when you use virtual waves against surfers either, since those waves are irrelevant to them.

Xor (talk)11:50, 20 March 2019
 

Firing waves means only waves where there was a real bullet. Against non-adaptive movement, the more waves the better, but adaptive opponents only dodge your real bullets, so the other waves will give bad information.
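
A minimal sketch of that idea, with Wave and train as stand-ins for whatever the gun actually uses:

// Tag each wave with whether a real bullet was fired on it, and skip the virtual
// ones when the opponent is adaptive.
class Wave {
    double[] situation;   // attributes recorded when the wave was fired
    double visitedGf;     // GF the enemy actually visited when the wave broke
    boolean firingWave;   // true only if a real bullet rode this wave
}

class WaveLogger {
    void onWaveBroken(Wave w, boolean enemyIsAdaptive) {
        if (enemyIsAdaptive && !w.firingWave) {
            return; // surfers only dodge real bullets, so virtual waves would mislead the gun
        }
        train(w.situation, w.visitedGf);
    }

    void train(double[] situation, double guessFactor) {
        // add (situation -> guessFactor) to the KNN tree / bins (omitted)
    }
}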

For me, what did well against adaptive movement was recording data, doing maybe 10 generations of genetic tuning, then re-recording the data.

Make sure to add the adaptive speed to your genetic parameters. You might also want to use parameters people don't surf with; I did some odd things in DrussGT.

But really, the secret to a good score is good movement.

Skilgannon (talk)13:30, 20 March 2019
After Xor said precise intersection, I was searching for another meaning in real waves. =)
My fitness function uses the KNNPredictor class from WhiteFang, so basically everything is included in the algorithm.
When I actually succeed at making Robocode allow more data saving, I'll move on to the recursive technique.
"But really, the secret to a good score is good movement." I know, but I have been working on movement since 2.2.7.1 and I want to stop my suffering for a while. Maybe a genetic algorithm against Simple Targeting strategies and for the flattener?
Edit:
After tuning with three more parameters, three things happened:
  • My AS gun outperformed my Main Gun against Shadow for the first time
  • I found out that my GA always maximizes K, minimizes Divisor (probably because I forgot to activate bot width calculations) and minimizes shots taken
  • Manhattan distance works much better than Squared Euclidean (see the sketch below)
The random weights started out with 1542 hits.
GA got it to 1923 hits.
I made K 100, Divisor 1 and Decay 0, and hits rose to 2086.
Using Manhattan distance got it to 2117 hits.
Finally, when I rolled really high and low values to 10 and 0, it got 2120 hits.
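
A small sketch of the two distance functions being compared in the last bullet; the weights array stands in for whatever attribute weights the GA produced:

class Distances {
    static double manhattan(double[] a, double[] b, double[] weights) {
        double d = 0;
        for (int i = 0; i < a.length; i++) {
            d += weights[i] * Math.abs(a[i] - b[i]);
        }
        return d;
    }

    static double squaredEuclidean(double[] a, double[] b, double[] weights) {
        double d = 0;
        for (int i = 0; i < a.length; i++) {
            double diff = a[i] - b[i];
            d += weights[i] * diff * diff;
        }
        return d;
    }
}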
Dsekercioglu (talk)14:03, 20 March 2019

I use a patched version of Robocode to allow unlimited data saving, but only from my data recording bot. Anyway, a normal Robocode with debug mode on is sufficient to do so; just make sure the robots in your test bed are free from file-writing bugs.

Have you ever tried using k = sqrt(tree size)? This is a common practice when it comes to knn.
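
A minimal sketch of that heuristic, assuming treeSize is just the number of points currently stored in the tree:

class KHeuristic {
    static int chooseK(int treeSize) {
        return Math.max(1, (int) Math.round(Math.sqrt(treeSize)));
    }
}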

Xor (talk)08:23, 21 March 2019
 
 
 
 
Thanks to Xor and Skilgannon for their help.
I have collected data from 1005 battles. My GA finally gives some sensible results: low K, reverse wall weighted less, a high weight for acceleration, etc.
Hopefully I'll get a score higher than 72.0 in TCAS this time.
Dsekercioglu (talk)11:01, 25 March 2019
 

Reproducing the Results

WhiteFang 2.2.7.1 is the best version of WhiteFang, but it had some issues. I have been trying to reproduce its results for a long time and, as you can understand, I couldn't match its performance.
Problem 1
I had a problem with my KNNPredictor class. As K increased or the sum of the attributes' weights increased, it would return bigger numbers, which caused my Simple predictor to have three times more effect than the Standard predictor (normally it should have half of its effect).
Problem 2
The flattener would log real waves twice, which decreased the number of data points I could find and weighted real waves twice as much.
Additional note: when I fixed the flattener problem, my score decreased.
I don't know how to solve it, since the Simple Formula, Standard Formula and Flattener Formula have different attributes, and the Standard Formula and Flattener Formula have 1 / (x * k + 1) type attributes. Any solutions?
Dsekercioglu (talk)14:19, 7 August 2018

Welcome to the land of Performance Enhancing Bug! When facing things like this, there are two ways to go — leave it, or fully understand how it works and reproduce it!

It seems that you have mostly understood how the bug works, then just tweak your formula to fit the bug in!

Btw, have you ever tried putting in two exactly identical bots with different version numbers, and seeing the score difference?

Xor (talk)14:33, 7 August 2018
I will just revert back to 2.2.7.1 with the XOR filter, which doesn't log real waves twice, and I will use my better firepower formula to get a better score.
I don't think putting in exactly the same bot will help, because no bots have changed since WhiteFang 2.2.7.1, but you may be right; my Bullet Shielding algorithm may cause extreme deviations in score.
Dsekercioglu (talk)16:32, 8 August 2018

The only purpose of putting the same bot in twice in the same environment is to see how much noise there is in your testing method. Experience from improving ScalarBot told me that most of the time a small change in score is just noise; a big drop is a bug and a big increase is a new critical feature.

Xor (talk)15:42, 10 August 2018
I will put 2.2.7.1N in after 2.4.5; maybe it is just big noise that I have been trying to get past for a long time =)
Dsekercioglu (talk)18:38, 10 August 2018
 
Oh, I am so dumb. It has nothing to do with my movement; 2.2.7.1 contains this code:
                counter++;
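                // as noted below, after the increment this condition never holds,
                // so the predicted position is never stored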
                if (counter == 0) {
                    possibleFuturePositions[direction + 1] = (Point2D.Double) predictedPosition.clone();
                }
Consequently it is impossible for the future position ever to be predicted; but my prediction system doesn't work properly anyway, so it was making my scores lower =)
Anyway, thank you so much for your time; I have actually learnt a lot of things.
Dsekercioglu (talk)21:21, 10 August 2018
 

This is so annoying: the bug I mentioned above is actually more sensible than "sometimes predicting the future position correctly and sometimes calculating the opposite direction"; however, it seems this bug was also a Performance Enhancing Bug.

Dsekercioglu (talk)10:01, 11 August 2018
 
 
I have just been wondering how you normalize dangers in ScalarBot, or, since it isn't open source, what the general way of doing it is. In the latest version of WhiteFang I use
weight * MaximumPossibleDistance / (euclidean_distance + 1) / predictionNum

to have a balance between different weighting schemes and K's.
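
For reference, the same formula written out as code; the names mirror the formula rather than WhiteFang's actual fields, and predictionNum is the number of neighbours used (K):

class DangerNormalizer {
    static double neighbourDanger(double weight, double maximumPossibleDistance,
                                  double euclideanDistance, int predictionNum) {
        return weight * maximumPossibleDistance / (euclideanDistance + 1) / predictionNum;
    }
}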

Dsekercioglu (talk)14:23, 9 August 2018

Just have a look at DrussGT and Diamond; ScalarBot uses a similar formula, and most (new) bots imitate this style as well.

Xor (talk)16:13, 10 August 2018

Make sure to use bullet damage and time-till-hit for weighting one wave vs another. Depending on the type of enemy this can make a big difference.
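
A minimal sketch of that weighting; Rules.getBulletDamage is Robocode's standard API, the rest is a stand-in:

import robocode.Rules;

class WaveWeighting {
    static double waveWeight(double bulletPower, double ticksUntilHit) {
        // more damaging and more imminent waves dominate the total danger
        return Rules.getBulletDamage(bulletPower) / Math.max(1.0, ticksUntilHit);
    }
}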

Skilgannon (talk)19:12, 10 August 2018
Thank you for your response. The weighting system I wrote above only affects the bins. Later I divide/multiply those values while choosing the best movement option, since WhiteFang has a bin-based KNN algorithm (the code base was originally designed for Neural Networks).
Actually, I have a logical mistake in my movement, but I haven't fixed it yet since I wanted a controlled testing environment.
Dsekercioglu (talk)22:16, 10 August 2018
 
 
 
 


Test Bed

I got RoboJogger working about 2 days ago and realised that challenges are not enough to improve WhiteFang. Since I don't have any experience choosing robots for Test Beds, I wanted to ask: how should I choose a test bed that will give me scores close to rumble scores?

Dsekercioglu (talk)12:38, 9 July 2018

I always got the best improvement from finding specific problem bots (look at the KNNPBI) and trying to design a specific feature that would help against the kind of behaviour they showed. Usually it involves watching a lot of battles. Test beds are only to make sure that nothing is broken against other bots when this is happening.

It is really about the size of the testbed you want. Best would be the whole rumble. Minimum is probably something that shoots HOT, something linear, something simple VCS, something simple WS + VCS, some PM, and a tough top-bot or two.

And from what I've found, fixing bugs almost always gets better results than adding features. So make sure you don't have bugs, and don't have any bad assumptions.

Skilgannon (talk)22:00, 9 July 2018
Thank you for your response about the test bed; I agree with the bugs part. I jumped from 28 to 26 with Bullet Shielding (2 days of coding) and from 26 to 23 with a bug fix (stop position calculation).
I just realized I had been calculating wave locations wrong for 10 months. I think I will add "First robot to enter the top 30 with wrong wave calculations" =)
Dsekercioglu (talk)11:12, 10 July 2018
 
 

Possible errors

I still see low scores such as 5% against some bots, and it seems like it isn't about the crowd part. I tried to fix it twice and it didn't work. I also tried to reproduce the error, but it doesn't occur in my Robocode 1.9.3.0. I didn't change anything but the normalised crowd thing. I would have seen if there was any IndexOutOfBoundsException or ArithmeticException.

Dsekercioglu (talk)15:59, 24 December 2017

Don’t worry, ScalarBot has the same bug, e.g. it scores near 0% against some opponents, and it has never been reproduced even after thousands of rounds.

Anyway, if that happens too much, you can try to catch every exception and log it to a file, then look up the low-scoring opponents in the log. ScalarBot fixed a really rare bug in the kd-tree by doing so.
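
A rough sketch of that approach, using Robocode's RobocodeFileOutputStream and the robot's data directory; the robot body is only a placeholder:

import java.io.IOException;
import java.io.PrintStream;
import robocode.AdvancedRobot;
import robocode.RobocodeFileOutputStream;

public class LoggingBot extends AdvancedRobot {
    @Override
    public void run() {
        try {
            while (true) {
                turnRadarRightRadians(Double.POSITIVE_INFINITY);
            }
        } catch (RuntimeException e) {
            // append the stack trace to a data file so low-scoring pairings can be
            // checked against it afterwards
            try (PrintStream log = new PrintStream(
                    new RobocodeFileOutputStream(getDataFile("errors.log").getPath(), true))) {
                log.println("Round " + getRoundNum() + ", time " + getTime());
                e.printStackTrace(log);
            } catch (IOException io) {
                // nothing more to do if logging itself fails
            }
            throw e; // still let the robot die visibly so the bug is not hidden
        }
    }
}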

Anyway, it still randomly scores very low against some bots, without throwing any exceptions.

Xor (talk)02:34, 25 December 2017

I think I should worry. In my tests it does better than my normal gun, but in the rumble it has about -3 APS. Could it be the bug fixed in 1.9.3.0?

Dsekercioglu (talk)07:12, 25 December 2017
 
java.lang.ArrayIndexOutOfBoundsException: 50
	at dsekercioglu.knn.knnCore.ags.kdtree.KdNode.addLeafPoint(KdNode.java:70)
	at dsekercioglu.knn.knnCore.ags.kdtree.KdNode.addPoint(KdNode.java:63)
	at dsekercioglu.knn.knnCore.ags.kdtree.KdTree.addPoint(KdTree.java:7)
	at dsekercioglu.knn.knnCore.KNNPredictor.addData(KNNPredictor.java:67)
	at dsekercioglu.knn.wfGun.gun.TestGun.wavePassed(TestGun.java:24)
	at dsekercioglu.knn.wfGun.Fang.updateWaves(Fang.java:122)
	at dsekercioglu.knn.wfGun.Fang.onScannedRobot(Fang.java:80)
	at dsekercioglu.knn.WhiteFang.onScannedRobot(WhiteFang.java:44)
	at robocode.ScannedRobotEvent.dispatch(ScannedRobotEvent.java:315)
	at robocode.Event$HiddenEventHelper.dispatch(Event.java:259)
	at net.sf.robocode.security.HiddenAccess.dispatch(HiddenAccess.java:191)
	at net.sf.robocode.host.events.EventManager.dispatch(EventManager.java:422)
	at net.sf.robocode.host.events.EventManager.processEvents(EventManager.java:376)
	at net.sf.robocode.host.proxies.BasicRobotProxy.executeImpl(BasicRobotProxy.java:423)
	at net.sf.robocode.host.proxies.BasicRobotProxy.execute(BasicRobotProxy.java:122)
	at net.sf.robocode.host.proxies.StandardRobotProxy.turnRadar(StandardRobotProxy.java:55)
	at robocode._AdvancedRadiansRobot.turnRadarRightRadians(_AdvancedRadiansRobot.java:150)
	at robocode.AdvancedRobot.turnRadarRightRadians(AdvancedRobot.java:1962)
	at dsekercioglu.knn.wfEyes.Lock.run(Lock.java:17)
	at dsekercioglu.knn.WhiteFang.run(WhiteFang.java:39)
	at net.sf.robocode.host.proxies.HostingRobotProxy.callUserCode(HostingRobotProxy.java:274)
	at net.sf.robocode.host.proxies.HostingRobotProxy.run(HostingRobotProxy.java:221)
	at net.sf.robocode.host.proxies.BasicRobotProxy.run(BasicRobotProxy.java:44)
	at java.lang.Thread.run(Thread.java:745)

I got this exception after some testing.

Dsekercioglu (talk)20:13, 25 December 2017

This is a bug in Rednaxela’s kd-tree. I created a PR to fix it on his Bitbucket a few years ago, but got no response.

IIRC, this bug happens when points are so concentrated that splitting needs to happen more than once in one call, which is not handled at all. That’s why it happens so rarely, and only with some sets of dimensions.
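
One possible guard against that kind of overflow (not Rednaxela's actual code, and not necessarily the fix in the PR): never write past the leaf's bucket, and if splitting cannot make room, grow the bucket instead.

import java.util.Arrays;

class Leaf {
    static final int BUCKET_SIZE = 50;
    double[][] points = new double[BUCKET_SIZE][];
    int size = 0;

    void addLeafPoint(double[] point) {
        if (size == points.length) {
            // the original code assumed one split is always enough; guard against the
            // case where it wasn't and the leaf is still full
            points = Arrays.copyOf(points, points.length * 2);
        }
        points[size++] = point;
    }
}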

Xor (talk)21:44, 25 December 2017

Feel free to use my Kd-Tree; it has protections against infinite splitting and similar performance to Rednaxela's (perhaps even better in mixed workloads due to cache locality).

Skilgannon (talk)19:56, 26 December 2017

Sorry for answering late; I actually wrote an answer, but I suppose there was a problem with my Wi-Fi. WhiteFang has already started using it, and the range search in your tree is wonderful.

Dsekercioglu (talk)13:41, 29 December 2017
 
 
 
 
 

Micro Ant

Hi,

WhiteFang's score against Ant seems a bit abnormal. I would expect it to be reversed.

Fighting battle 15 ... zyx.micro.Ant 1.1,dsekercioglu.knn.WhiteFang 1.5 RESULT = zyx.micro.Ant 1.1 wins 5540 to 726

It has weaknesses against PM but not this much.

Dsekercioglu (talk)13:32, 23 December 2017

Sorry, I figured out why it happened. I assigned 0 values to the scores and they were updated with waves (a form of crowd targeting). However, it would give an ArithmeticException (which I haven't actually seen yet) if the distance was too high.

Dsekercioglu (talk)14:49, 23 December 2017