Firing waves means only waves that carried a real bullet. Against non-adaptive movement, the more waves the better, but adaptive opponents only dodge your actual bullets, so the non-firing waves will give bad information.
For me, what worked well against adaptive movements was recording data, doing maybe 10 generations of genetic tuning, then re-recording the data.
Make sure to add the adaptive speed to your genetic parameters. You might also want to use parameters people don't surf with; I did some odd things in DrussGT.
But really, the secret to a good score is good movement.
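As a rough illustration of the record-then-tune loop described above (not WhiteFang's or DrussGT's actual tuner; the weight layout, mutation scheme and fitness function are placeholder assumptions), a minimal (1+1)-style hill climber over KNN weights might look like:

```java
import java.util.Random;
import java.util.function.ToDoubleFunction;

public class GeneticTuner {
    static final Random RNG = new Random(42);

    // Mutate one randomly chosen weight with Gaussian noise, clamped at 0.
    static double[] mutate(double[] genes, double sigma) {
        double[] child = genes.clone();
        int i = RNG.nextInt(child.length);
        child[i] = Math.max(0, child[i] + RNG.nextGaussian() * sigma);
        return child;
    }

    // Run a fixed number of generations against the recorded data,
    // keeping a child only when its fitness improves on the parent.
    static double[] tune(double[] genes, ToDoubleFunction<double[]> fitness,
                         int generations) {
        double best = fitness.applyAsDouble(genes);
        for (int g = 0; g < generations; g++) {
            double[] child = mutate(genes, 0.1);
            double f = fitness.applyAsDouble(child);
            if (f > best) {
                best = f;
                genes = child;
            }
        }
        return genes;
    }
}
```

A real population-based GA adds crossover and selection, but the outer loop is the same: tune against one recording, then re-record and repeat so an adaptive opponent cannot stay one step ahead of stale data.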
- After Xor said precise intersection, I was searching for another meaning in real waves. =)
- My fitness function uses the KNNPredictor class in WhiteFang, so basically everything is included in the algorithm.
- When I actually succeed at making Robocode allow more data saving, I'll move on to the recursive technique.
- "But really, the secret to a good score is good movement." I know, but I have been working on movement since 18.104.22.168 and I want to stop my suffering for a while. Maybe a genetic algorithm against Simple Targeting strategies and for the flattener?
- After tuning with three more parameters, three things happened:
- My AS gun outperformed my Main Gun against Shadow for the first time.
- I found out that my GA always maximizes K, minimizes Divisor (probably I forgot to activate bot-width calculations) and minimizes shots taken.
- Manhattan distance works much better than Squared Euclidean.
- The random weights started out with 1542 hits.
- GA got it to 1923 hits.
- I made K 100, Divisor 1 and Decay 0, and hits rose to 2086.
- I used Manhattan distance and it got 2117 hits.
- Finally, when I rolled the really high and low values to 10 and 0, it got 2120 hits.
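For reference, the two distance metrics compared above work like this (a generic sketch; the attribute arrays are placeholders, not WhiteFang's actual ones):

```java
public class Distances {
    // Manhattan: sum of absolute differences per attribute.
    static double manhattan(double[] a, double[] b) {
        double d = 0;
        for (int i = 0; i < a.length; i++) {
            d += Math.abs(a[i] - b[i]);
        }
        return d;
    }

    // Squared Euclidean: sum of squared differences; the square root is
    // unnecessary for nearest-neighbour ranking, so it is skipped.
    static double squaredEuclidean(double[] a, double[] b) {
        double d = 0;
        for (int i = 0; i < a.length; i++) {
            double diff = a[i] - b[i];
            d += diff * diff;
        }
        return d;
    }
}
```

One plausible reason Manhattan does better here: squaring exaggerates a single badly scaled attribute, while Manhattan treats each attribute's error linearly.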
I use a patched version of Robocode to allow unlimited data saving, but only from my data-recording bot. Anyway, a normal Robocode with debug mode on is sufficient; just make sure the robots in your test bed are free from file-writing bugs.
Have you ever tried using k = sqrt(tree size)? This is a common practice in kNN.
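A minimal sketch of that heuristic (clamping to a maxK is my own addition, not part of the rule itself):

```java
public class KnnK {
    // k = sqrt(tree size), clamped to [1, maxK], so k keeps adapting
    // as data accumulates over the battle.
    static int chooseK(int treeSize, int maxK) {
        int k = (int) Math.sqrt(treeSize);
        return Math.max(1, Math.min(k, maxK));
    }
}
```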
- I finally succeeded at increasing the data file quota to 20MB and will probably increase it even more when I turn back to TCRM.
- I'll try sqrt(treesize); I already have the code, and it can easily be added to my algorithm.
- The only problem I have now is that Robocode truncates my data files if I finish the battle at max TPS.
- Note: I am saving a double array, an Integer array and a Double Array
20MB is too small. I generally record 2G of data via RoboRunner, with 4 Robocode instances at 500M each.
I'm not experiencing data truncation. I'm using a worker thread that logs data asynchronously with a java.nio FileChannel, but the OutputStream API should be enough, and you shouldn't experience data truncation with it either. Where do you do your file writing? Did you flush the higher-level stream when it's done? If you don't, Robocode will close the lower-level ones, resulting in loss of data.
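To illustrate the flushing point: if only the underlying stream is closed, bytes still sitting in a wrapper's buffer are lost. In this sketch a ByteArrayOutputStream stands in for Robocode's file stream:

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class FlushDemo {
    // Returns how many bytes actually reached the underlying stream.
    static int bytesReached(boolean flushWrapper) throws IOException {
        ByteArrayOutputStream raw = new ByteArrayOutputStream(); // stand-in for the file stream
        BufferedOutputStream buffered = new BufferedOutputStream(raw);
        buffered.write(new byte[]{1, 2, 3}); // sits in the wrapper's buffer
        if (flushWrapper) {
            buffered.flush(); // pushes the buffered bytes down to raw
        }
        // If Robocode closes only the lower-level stream at round end,
        // any bytes never flushed out of the wrapper are simply lost.
        return raw.size();
    }
}
```

So the rule is: whatever BufferedOutputStream / ObjectOutputStream you wrap around the file stream, call flush() (or close()) on the outermost wrapper yourself before the round ends.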
- "Did you flush the higher level stream when it's done?" I really don't have any idea what that means =(
- How long does a generation take with 2G of data? Even when I don't fill the quota, a single generation takes about 30 seconds with a population size of 102.
- I use the compressed serialization method from the wiki.
- Edit: Data truncation problem just disappeared after I restarted my computer.
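As I understand it, the wiki's compressed serialization method amounts to wrapping Java serialization in a GZIP stream; a minimal round-trip sketch (my own reconstruction, not the exact wiki code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class CompressedSerialization {
    // Serialize an object through a GZIP stream into a byte array.
    static byte[] save(Serializable obj) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new GZIPOutputStream(bytes))) {
            out.writeObject(obj); // closing out also finishes the GZIP stream
        }
        return bytes.toByteArray();
    }

    // Reverse the process: decompress, then deserialize.
    static Object load(byte[] data) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(
                 new GZIPInputStream(new ByteArrayInputStream(data)))) {
            return in.readObject();
        }
    }
}
```

Note the try-with-resources on the outermost stream: that is exactly the "flush the higher-level stream" rule from the truncation discussion above, applied automatically.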
2G of data takes me 5s (4 threads in parallel), but that's 1NN with fewer than 5 attributes, which should be lightning fast anyway.
Using all the waves (including virtual ones) with maxK=100 and a huge tree with 10+ attributes takes me less than a minute (still 4 threads in parallel).
I'm using NIO for file reading, and handmade serialization instead of the Java built-in one, which is the secret to speed.
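A sketch of what handmade serialization means here: a fixed binary layout packed into a ByteBuffer, with no reflective ObjectInputStream involved (the record layout is illustrative, not the actual format):

```java
import java.nio.ByteBuffer;

public class HandmadeSerialization {
    // Pack a record as [count:int][attr0:double][attr1:double]...,
    // a fixed layout both writer and reader agree on.
    static byte[] pack(double[] attrs) {
        ByteBuffer buf = ByteBuffer.allocate(4 + 8 * attrs.length);
        buf.putInt(attrs.length);
        for (double a : attrs) {
            buf.putDouble(a);
        }
        return buf.array();
    }

    // Read the same layout back; no class metadata, no reflection.
    static double[] unpack(byte[] bytes) {
        ByteBuffer buf = ByteBuffer.wrap(bytes);
        double[] attrs = new double[buf.getInt()];
        for (int i = 0; i < attrs.length; i++) {
            attrs[i] = buf.getDouble();
        }
        return attrs;
    }
}
```

Because the layout is fixed, these buffers can be read straight off a FileChannel without the per-object overhead and class metadata of Java's built-in serialization.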
- 5 seconds?? I just started using 4 threads and it takes 11 seconds with 1.4 MB of data, without virtual waves, with max K 100 and a population size of 102.
- What is your fitness function? Mine perfectly simulates WhiteFang's targeting, including bot-width calculations. I don't think the 51-bin system slows down the robot, since it should just be faster as long as I have K greater than 51.
- I convert all the data into ArrayLists, so file reading speed shouldn't matter much (or does the memory it takes slow things down?).
It's 1NN with only firing waves. It seems that the kd-tree is the only slow part.
Worth mentioning that I already store everything slow to file, e.g. precise intersection, precise MEA, etc. So all I do is load those attributes, transform them with my formula, load them into the tree and do KDE for every firing wave.
Anyway, this can be considered 1 population and 1 generation, as I'm still tuning it by hand.
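The KDE step in the pipeline above could be sketched like this (a generic Gaussian-kernel version; the bandwidth and evaluating density only at the sample points are my simplifications):

```java
public class KernelDensity {
    // Given the guess factors of the k nearest neighbours, return the
    // candidate with the highest kernel density, i.e. the most-visited
    // angle. Bandwidth choice is illustrative.
    static double bestGuessFactor(double[] neighborGfs, double bandwidth) {
        double bestGf = 0;
        double bestDensity = -1;
        for (double candidate : neighborGfs) { // evaluate at each sample point
            double density = 0;
            for (double gf : neighborGfs) {
                double u = (candidate - gf) / bandwidth;
                density += Math.exp(-0.5 * u * u); // Gaussian kernel
            }
            if (density > bestDensity) {
                bestDensity = density;
                bestGf = candidate;
            }
        }
        return bestGf;
    }
}
```

With precise intersection and precise MEA already precomputed on disk, this double loop over at most k neighbours per firing wave is cheap next to the kd-tree search itself, which matches the profiling result below.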
- OK, now I understand. I was afraid that I had a big flaw in the algorithm that made it slow. What I have learned is that a genetic algorithm always works better than manual tuning in the long run. When it ends, I roll the numbers that are really high and low to the max/min values, which gives about a 1% boost in score and easily surpasses the hand tuning. Since only my GA is multi-threaded, hand tuning is a little slower too.
- One final question: where do the files I save go with RoboRunner-GUI? I didn't even test (Std. Me) before putting WhiteFang against 28 surfers for 10 seasons, then I accidentally compiled the project (Me again).
I don't use the GUI. The recorded files are located in each Robocode installation, inside the regular data file location.
I'm trying GA then. I'm tired of tuning my anti-random gun by hand, because tuning for one set of data decreases performance against another set of data (1500 battles should be enough, but that's not the case when all your improvements are below 1% hit rate).
- I have even tried sorting by "last modified" and "last created", but nothing seems to have appeared. I am still getting data files with the development version, though. I can't test the packaged one because of the bug in 22.214.171.124.
- Edit: .data was a hidden directory. Command + Shift + . solved all the problems.
Update: after some profiling, it is confirmed that the kd-tree is the only bottleneck.
However, it seems that file reading time grows as kd-tree time grows.
And after putting deserialization into a separate thread and using a producer-consumer pattern to communicate, total run time stayed the same while file reading time decreased greatly. Maybe my profiling tool is yielding inaccurate results.
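That producer-consumer split can be sketched with a BlockingQueue (the record type, queue size and sentinel value are illustrative, not the actual implementation):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PipelineSketch {
    static final double[] POISON = new double[0]; // sentinel: no more records

    // A reader thread deserializes records while the caller consumes them,
    // so file reading overlaps with kd-tree work instead of preceding it.
    static int loadAll(int records) throws InterruptedException {
        BlockingQueue<double[]> queue = new ArrayBlockingQueue<>(1024);

        Thread reader = new Thread(() -> {
            try {
                for (int i = 0; i < records; i++) {
                    queue.put(new double[]{i}); // stand-in for a deserialized record
                }
                queue.put(POISON); // signal end of input
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        reader.start();

        int loaded = 0;
        for (double[] r = queue.take(); r != POISON; r = queue.take()) {
            loaded++; // stand-in for tree.addPoint(r)
        }
        reader.join();
        return loaded;
    }
}
```

If the consumer (the kd-tree) is the bottleneck, total run time barely moves even though the reader's wall-clock share shrinks, which is consistent with the profiling observation above rather than a tooling error.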