Thread history

Fragment of a discussion from Talk:WhiteFang
I finally succeeded at increasing the data file quota to 20MB and will probably increase it even more when I turn back to TCRM.
I'll try the sqrt(treesize), I already have the code and it can be easily added to my algorithm.
The only problem I have now is that robocode truncates my data files if I finish the battle at max TPS.
Note: I am saving a double[] array, an Integer[] array and a Double[] array
Dsekercioglu (talk)10:19, 21 March 2019

20MB is too small. I generally record 2G of data via roborunner: 4 robocode instances with 500M each.

I’m not experiencing data truncation. I’m using a worker thread that logs data asynchronously with a java.nio FileChannel. However, the OutputStream API should be enough, and you shouldn’t experience data truncation with it either. Where do you do the file writing? Did you flush the higher-level stream when it’s done? If you don’t, robocode will close the lower-level ones, resulting in loss of data.
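A minimal sketch of what I mean (plain FileOutputStream here for illustration; in robocode you'd wrap the stream for the file from getDataFile() — i.e. a RobocodeFileOutputStream — instead):

```java
import java.io.*;

public class FlushDemo {
    // Write a length-prefixed double[] and close the *outermost* stream
    // yourself; closing it flushes the whole chain, so nothing is lost.
    static void save(File f, double[] data) throws IOException {
        DataOutputStream out = new DataOutputStream(
                new BufferedOutputStream(new FileOutputStream(f)));
        try {
            out.writeInt(data.length);
            for (double d : data) out.writeDouble(d);
        } finally {
            out.close(); // flushes the buffer, then closes the file
        }
    }

    static double[] load(File f) throws IOException {
        DataInputStream in = new DataInputStream(
                new BufferedInputStream(new FileInputStream(f)));
        try {
            double[] data = new double[in.readInt()];
            for (int i = 0; i < data.length; i++) data[i] = in.readDouble();
            return data;
        } finally {
            in.close();
        }
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("flushdemo", ".bin");
        save(f, new double[] {1.0, 2.5, -3.75});
        double[] back = load(f);
        System.out.println(back.length + " " + back[2]); // prints "3 -3.75"
        f.delete();
    }
}
```

If you only close the inner FileOutputStream, whatever is still sitting in the buffered/higher-level stream never reaches disk — that's exactly the truncation symptom.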

Xor (talk)03:20, 22 March 2019
"Did you flush the higher level stream when it’s done?" I really don't have any idea about its meaning =(
How long does a generation take with 2G of data? Even when I do not fill the quota, a single generation takes about 30 seconds with a population size of 102.
I use the compressed serialization method in the wiki.
Edit: The data truncation problem just disappeared after I restarted my computer.
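The compressed serialization pattern is, roughly, an ObjectOutputStream layered over a GZIPOutputStream (this is a sketch, not necessarily the exact wiki snippet):

```java
import java.io.*;
import java.util.zip.*;

public class GzipSerialization {
    // Serialize any Serializable object (double[], ArrayList<Double>, ...)
    // through a gzip stream.
    static void save(File f, Serializable obj) throws IOException {
        ObjectOutputStream out = new ObjectOutputStream(
                new GZIPOutputStream(new FileOutputStream(f)));
        try {
            out.writeObject(obj);
        } finally {
            out.close(); // also finishes the gzip trailer
        }
    }

    @SuppressWarnings("unchecked")
    static <T> T load(File f) throws IOException, ClassNotFoundException {
        ObjectInputStream in = new ObjectInputStream(
                new GZIPInputStream(new FileInputStream(f)));
        try {
            return (T) in.readObject();
        } finally {
            in.close();
        }
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("gz", ".bin.gz");
        save(f, new double[] {0.5, 1.5});
        double[] back = load(f);
        System.out.println(back[1]); // prints "1.5"
        f.delete();
    }
}
```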
Dsekercioglu (talk)12:18, 22 March 2019

2G of data takes me 5s (4 threads in parallel), and that's 1NN with fewer than 5 attributes, which should be lightning fast anyway.

Using all the waves (including virtual ones) with maxK=100 and a huge tree of 10+ attributes takes me less than a minute (still 4 threads in parallel).

I'm using NIO for file reading, and I use handmade serialization instead of the Java built-in one, which is the secret to the speed.
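By handmade serialization I mean writing the raw primitives yourself, e.g. a length-prefixed block of doubles through a FileChannel, roughly like this (a sketch, not my exact code):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class NioDoubles {
    // Handmade serialization: raw int length + raw doubles,
    // no java.io.Serializable reflection overhead.
    static void write(Path p, double[] data) throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(4 + 8 * data.length);
        buf.putInt(data.length);
        for (double d : data) buf.putDouble(d);
        buf.flip();
        try (FileChannel ch = FileChannel.open(p,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            ch.write(buf);
        }
    }

    static double[] read(Path p) throws IOException {
        try (FileChannel ch = FileChannel.open(p, StandardOpenOption.READ)) {
            ByteBuffer buf = ByteBuffer.allocate((int) ch.size());
            while (buf.hasRemaining() && ch.read(buf) != -1) { }
            buf.flip();
            double[] data = new double[buf.getInt()];
            for (int i = 0; i < data.length; i++) data[i] = buf.getDouble();
            return data;
        }
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("nio", ".bin");
        write(p, new double[] {3.14, 2.71});
        System.out.println(read(p)[0]); // prints "3.14"
        Files.delete(p);
    }
}
```

Since the layout is fixed, reading is a single bulk channel read plus a tight decoding loop, which is where most of the speedup over ObjectInputStream comes from.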

Xor (talk)17:05, 22 March 2019
5 seconds?? I just started using 4 threads and it takes 11 seconds with 1.4 MB of data, without virtual waves, with max K 100 and a population size of 102.
What is your fitness function? Mine perfectly simulates WhiteFang's targeting, including bot-width calculations. I don't think the 51-bin system slows down the robot, since it should only be faster as long as K is more than 51.
I convert all the data into ArrayLists, so file-reading speed shouldn't matter much (or does the memory they take slow it down?).
Dsekercioglu (talk)18:50, 22 March 2019

It's 1NN with only firing waves. It seems that kd-tree is the only slow part.

Worth mentioning that I already store everything slow to file, e.g. precise intersection, precise MEA, etc. So all I do is load those attributes, transform them with my formula, load them into the tree and do KDE for every firing wave.

Anyway this can be considered 1 population and 1 generation, since I'm still tuning it by hand.

Xor (talk)02:39, 23 March 2019

Flush means: when all the data is written, call DataOutputStream.close() yourself.

Xor (talk)17:06, 22 March 2019