How do you build a good test bed?
Movement I find much more interesting - I think there is still a lot of unexplored potential there. Targeting can only get as good as the ML system behind it, though. The only tricks I see on the targeting side are bullet shielding and bullet power optimization.
For the surfers I evolved the weights in multiple steps: record data, tune weights, re-record data, retune weights, etc. I agree that fixed data isn't ideal against learning movements, but it seemed to work OK.
By recorded battles, I mean I just recorded the ML-style interactions. So the only work left for the genetic algorithm was: parse an input line, add it to the tree, and if it was a firing tick, do KNN + kernel density and, N ticks later, check whether the prediction was within the correct bounds.
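A minimal sketch of that replay loop, in case anyone wants to reproduce it. The line format, class names, and the brute-force KNN (standing in for a real Kd-tree) are all my assumptions, not the original code:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Offline evaluation sketch: replay recorded waves, learn from every wave,
// and score predictions on firing waves only. Since each recorded wave
// already contains its outcome, a prediction can be scored immediately
// instead of waiting N ticks like a live gun would.
public class OfflineEvaluator {

    record Wave(double[] features, double observedGf, double hitWidthGf, boolean firing) {}

    // Assumed CSV line layout: f1,...,fk,observedGf,hitWidthGf,firingFlag
    static Wave parse(String line) {
        String[] p = line.split(",");
        int k = p.length - 3;
        double[] f = new double[k];
        for (int i = 0; i < k; i++) f[i] = Double.parseDouble(p[i]);
        return new Wave(f, Double.parseDouble(p[k]),
                Double.parseDouble(p[k + 1]), "1".equals(p[k + 2]));
    }

    // Weighted squared Euclidean distance; the weights are what gets evolved.
    static double dist(double[] a, double[] b, double[] w) {
        double d = 0;
        for (int i = 0; i < a.length; i++) {
            double t = w[i] * (a[i] - b[i]);
            d += t * t;
        }
        return d;
    }

    // Score = fraction of firing waves whose KDE-peak prediction fell
    // inside the recorded hit window.
    static double evaluate(String file, double[] weights) throws IOException {
        List<Wave> tree = new ArrayList<>(); // the real code used a Kd-tree
        int hits = 0, firingWaves = 0;
        try (BufferedReader in = new BufferedReader(new FileReader(file))) {
            String line;
            while ((line = in.readLine()) != null) {
                Wave w = parse(line);
                if (w.firing() && !tree.isEmpty()) {
                    firingWaves++;
                    List<Wave> nn = new ArrayList<>(tree);
                    nn.sort(Comparator.comparingDouble(
                            o -> dist(o.features(), w.features(), weights)));
                    nn = nn.subList(0, Math.min(50, nn.size()));
                    if (Math.abs(kdePeak(nn) - w.observedGf()) < w.hitWidthGf()) hits++;
                }
                tree.add(w); // learn from every wave, firing or not
            }
        }
        return firingWaves == 0 ? 0 : (double) hits / firingWaves;
    }

    // Pick the neighbour GF with the highest Gaussian kernel density.
    static double kdePeak(List<Wave> nn) {
        double best = 0, bestScore = -1;
        for (Wave a : nn) {
            double s = 0;
            for (Wave b : nn) {
                double d = (a.observedGf() - b.observedGf()) / 0.1; // fixed bandwidth
                s += Math.exp(-0.5 * d * d);
            }
            if (s > bestScore) { bestScore = s; best = a.observedGf(); }
        }
        return best;
    }
}
```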
About 15 minutes per generation for an i5-2410M using 4 threads.
So recording only gun waves seems OK? And IMO the gun prediction for each wave can be evaluated immediately, since the result is already known. By the way, are you optimizing overall hit rate (e.g. total hits / total shots across all battles) or Robocode score (e.g. average bullet damage per battle)? I think the latter should be better when bullet power selection is also being evaluated (or when it is not disabled). But since in real battles hits/misses also affect the total number of waves per round, that would be inaccurate for recorded battles. So how do you deal with bullet power? IMO using the recorded values sounds reasonable, although not perfect.
The difference between evaluating overall hit rate and average bullet damage per battle is interesting. The latter effectively weights each shot by its damage. Similarly, comparing average hit rate per battle with overall hit rate: overall hit rate weights each battle by the number of bullets fired in it, while the per-battle average weights all battles equally. For example, with 20/100 hits in one battle and 5/10 in another, the overall hit rate is 25/110 ≈ 22.7%, but the average per-battle hit rate is (20% + 50%) / 2 = 35%.
I optimized for hit rate. Bullet power was kept the same as when it was recorded.
And I saved/loaded all waves (for learning), but only did prediction using firing waves.
So... each of those generations was evolved against those 5000 battles, right? What was your population size? I tried my hand at genetic tuning some time ago, but I gave up because my evolution step seemed too slow. I'm wondering what population size you used when you got those 15 minutes, because one generation with 150 battles takes me way longer than that :/ I need some reference point to optimize my targeting system.
From memory, the population size was about 20. The algorithm was something between gradient descent and a genetic algorithm: it moved members toward the stronger solutions and away from the weaker ones, plus some random component (see the sketch after this list). Remember, I had already extracted all of the features and saved them just before they were inserted into the Kd-tree, so the only things I needed at evaluation time were:
- read data from file
- add points to the tree
- KNN/KDE
- count inliers vs outliers -> give a score
Then, at the end, multiply the evolved weights into the weights in the code, recompile, and collect a new set of data; repeat until happy.
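Something like the following update step, as I read that description; the ranking scheme, step size, and elitism detail are my guesses, not the original code:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.Random;

// Sketch of the hybrid GA / gradient-descent step described above:
// rank the population by score, then move each member along the
// best-minus-worst direction (a crude finite-difference gradient
// estimate over the population), plus some random noise.
public class EvolveStep {
    static final Random RNG = new Random();

    static double[][] step(double[][] pop, double[] scores, double lr, double noise) {
        int n = pop.length, dim = pop[0].length;
        Integer[] order = new Integer[n];
        for (int i = 0; i < n; i++) order[i] = i;
        Arrays.sort(order, Comparator.comparingDouble(i -> -scores[i])); // best first

        double[] best = pop[order[0]];
        double[] worst = pop[order[n - 1]];
        double[][] next = new double[n][dim];
        for (int i = 0; i < n; i++) {
            for (int d = 0; d < dim; d++) {
                double grad = best[d] - worst[d];
                next[i][d] = pop[i][d] + lr * grad + noise * RNG.nextGaussian();
            }
        }
        next[0] = best.clone(); // keep the best member unchanged (elitism)
        return next;
    }
}
```

Using a population direction instead of a true gradient avoids having to differentiate the KNN score, which is piecewise constant in the weights; the noise term is what keeps it from collapsing to a single point too early.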
Just to clarify: maybe not in the classic sense, but your algorithm is more of an elitist one than a mutator, is that it?
I'm doing exactly that. I just ran a generation with population size 30 on top of 155 battles from top bots of the Rumble, and it took me about 15 minutes. I'll debug what is taking so much time later. Thanks for your help :)
Make sure you are only simulating aiming on waves that you actually fired.
I think the closest would be something between gradient descent and stochastic learning.
Still not quite, because it uses a population like a GA does, and uses linear combinations between population members to estimate a gradient, similar to what gradient descent would do. Honestly, there were probably off-the-shelf algorithms that would have worked better out of the box, but this worked fine, it just took a bit longer.
Well, this combination sounds great, and it is more like how I tune weights by hand than like traditional GAs. And it should work far better than hand-tuning, since it runs way more battles with a much larger population.
And it's way faster (and has less variance) with recorded battles. The only problem is overfitting to the recorded battles, but that should be handled well by many tune/re-record iterations.
I'm still wondering, though: will it forget the earlier tune/re-record iterations and overfit the newer ones? Since this sounds more like metric learning, it wouldn't surprise me if it behaved differently. Did you try re-running the old battles after tuning on newer ones to check?
I'm doing nearly the same thing now. I write the KNN data points and guess factors (GFs) to files, so all I do is: read data from file, add to tree, KNN/KDE, count inliers vs. outliers. And I'm only doing KNN/KDE on firing waves.
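For reference, the logging side can be as simple as one line per wave, which makes the later replay trivial. A sketch; the field layout and class name are my own choices, and a real Robocode bot would have to write through its data directory (e.g. via RobocodeFileOutputStream) rather than an arbitrary path:

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

// Sketch: dump one line per wave so the tuner can replay it later.
// Assumed layout: features..., guess factor, hit width, firing flag.
public class WaveLogger implements AutoCloseable {
    private final PrintWriter out;

    public WaveLogger(String path) throws IOException {
        out = new PrintWriter(Files.newBufferedWriter(Paths.get(path),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND));
    }

    public void log(double[] features, double gf, double hitWidthGf, boolean firing) {
        StringBuilder sb = new StringBuilder();
        for (double f : features) sb.append(f).append(',');
        sb.append(gf).append(',').append(hitWidthGf).append(',').append(firing ? 1 : 0);
        out.println(sb);
    }

    @Override public void close() { out.close(); }
}
```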
However, it takes me ~10 minutes per generation with only 1500 TCRM battles. My population size is also 20, and I'm also using 4 threads. It's a Core i7 with 4 cores at 2.6 GHz, so it should be even faster than the i5-2410M, which has only 2 cores.
Are you reading data and adding it to the tree at the same time, or reading all the data into memory in one go and then adding it to the tree?
It was: read a line, add to tree, and if it was a firing tick, do a prediction. For parallelization I just started a new thread for each bot and joined the thread when the bot was processed. It would probably be a bit faster with a thread pool.
Unfortunately I think I lost this code; it was probably on my university computer...
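For what it's worth, the thread-pool version is a small change. Here is a sketch using a fixed-size ExecutorService, with processBot as a stand-in for the per-bot read/add/predict loop above:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: evaluate one bot per task on a fixed-size pool instead of
// spawning and joining a thread per bot, so thread creation overhead
// is paid once and the CPUs stay saturated.
public class ParallelEval {

    static double totalScore(List<String> botFiles, double[] weights, int threads)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<Double>> results = new ArrayList<>();
            for (String file : botFiles) {
                results.add(pool.submit(() -> processBot(file, weights)));
            }
            double sum = 0;
            for (Future<Double> f : results) sum += f.get(); // propagates failures
            return sum;
        } finally {
            pool.shutdown();
        }
    }

    // Placeholder for the actual evaluation of one bot's recorded waves.
    static double processBot(String file, double[] weights) {
        return 0.0;
    }
}
```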
I'm even using a thread pool & NIO for potentially faster execution. Then again, 5000 RoboRumble battles probably shouldn't take 3x as long as 1500 TCRM battles, since the rumble contains a lot of easy targets which get destroyed in seconds. I'll experiment later.
Btw, my crossover code is not simply doing gradient descent; rather, per weight, it either takes a gradient descent step or copies the weight from one parent directly, based on a random choice, and random noise is also added with small probability. I think this process explores more of the search space than plain gradient descent + a random component. In my experience, the search space of KNN weights is non-trivial, although some patterns do show up across most good weight sets.
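A sketch of that per-weight crossover as I understand it; all probabilities and step sizes here are made-up values:

```java
import java.util.Random;

// Per-weight crossover: either take a gradient-descent style step or
// copy the weight directly from one parent, chosen at random, with
// low-probability mutation noise on top.
public class Crossover {
    static final Random RNG = new Random();

    static double[] child(double[] strong, double[] weak,
                          double lr, double pCopy, double pNoise) {
        double[] c = new double[strong.length];
        for (int i = 0; i < c.length; i++) {
            if (RNG.nextDouble() < pCopy) {
                // take the weight from one parent directly
                c[i] = RNG.nextBoolean() ? strong[i] : weak[i];
            } else {
                // gradient-descent style step away from the weak parent
                c[i] = strong[i] + lr * (strong[i] - weak[i]);
            }
            if (RNG.nextDouble() < pNoise) {
                c[i] += RNG.nextGaussian(); // rare random mutation
            }
        }
        return c;
    }
}
```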
One more question: how many generations do you generally use?
For me, 10 generations produces good enough results, and increasing that to 100 doesn't improve things much.
However, it seems that tuning against 1500 TCRM battles suffers from a lot of overfitting, so I'm trying the full rumble now.
Each time I collect data and do genetic tuning with 1500 TCRM battles, the hit rate increases from ~16% to ~17%, yet the actual TCRM score sometimes even decreases.
It depended on the population size and the sampling strategy I used. With a larger population and a less aggressively converging sampling strategy, I could run up to about 100 generations before it converged.
And I think the solution space is very non-convex, with lots of local minima; I ran quite a few simulations and it converged to a different solution each time.