User talk:Jmb
Contents
Thread title | Replies | Last modified |
---|---|---|
Welcome | 18 | 02:52, 18 June 2012 |
Hi mate. You can tell him he is very welcome to the Robocode community :). No offense, I'm just kidding.
Would be nice to hear more about your competition. How is it going with the 1200x1200 field? I think Walls must be good, because he is quite often out of radar range.
Maybe if Skilgannon makes his LiteRumble official we could see the competition as well.
Take Care
Welcome! The work competitions sound fun. I wish I knew some real life Robocoders. =) Good luck with your bots.
Kudos for doing a competition at work!
About 3 years ago I got a copy of the Rumbleserver code and ran a Rumble for my high school CS students as a competition for a 6 week project. On the busiest days, some of the more prolific students were submitting multiple revisions to the rumble.
The passing standard was beating sample.Walls, sample.RamFire and one other sample bot (I forget which) in 1v1 combat. After that, the scores were scaled by their rumble APS. It was tremendous fun. :)
Thanks for the welcome guys. It's great to know there's still an active community years after the game was introduced!
Voidious, I guess it's a privilege knowing other robocoders, hadn't thought about that. Have you considered trying to organise a competition in your city? A group like the local Linux Users Group, or ACM, might be interested. I've considered running something like that after this comp is over. I have a feeling I might be hooked on Robocode for a while.
Happy to talk more about our competition, let me know if you have any questions.
Here's the latest rankings...
Rank | Robot Name | Total Score | Survival | Surv Bonus | Bullet Dmg | Bullet Bonus | Ram Dmg * 2 | Ram Bonus | 1sts | 2nds | 3rds |
---|---|---|---|---|---|---|---|---|---|---|---|
1st | MarksRobots.Mbotv1* | 407104 (12%) | 279100 | 19880 | 99602 | 7967 | 462 | 93 | 142 | 81 | 75 | |
2nd | apc.BadWolf* | 380278 (11%) | 256350 | 18060 | 95338 | 8854 | 1619 | 57 | 129 | 61 | 59 | |
3rd | apc.LeeroyJenkins2* | 373103 (11%) | 265800 | 16660 | 82728 | 6402 | 1505 | 9 | 119 | 121 | 58 | |
4th | apc.FaceOfBoe 1.0* | 283546 (8%) | 217900 | 5320 | 55499 | 2345 | 2318 | 164 | 38 | 58 | 65 | |
5th | apc.Colossus2 0.13 | 258344 (7%) | 196950 | 5460 | 50655 | 3553 | 1682 | 45 | 39 | 24 | 59 | |
6th | apc.ShellyBot* | 231230 (7%) | 136400 | 700 | 76411 | 6193 | 10590 | 935 | 5 | 8 | 7 | |
7th | Tim.Maximillian 1.0 | 218011 (6%) | 152950 | 560 | 58376 | 3448 | 2632 | 45 | 4 | 11 | 23 | |
8th | wally.walnut* | 200190 (6%) | 164850 | 980 | 32508 | 1021 | 821 | 10 | 7 | 31 | 34 | |
9th | apc.Walls* | 197329 (6%) | 162850 | 420 | 32364 | 890 | 796 | 10 | 3 | 30 | 42 | |
10th | apc.JarrodDoomedRobot* | 167150 (5%) | 149300 | 1400 | 15339 | 514 | 596 | 0 | 10 | 21 | 25 | |
11th | apc.stratman* | 167081 (5%) | 133450 | 0 | 31216 | 1015 | 1393 | 7 | 0 | 6 | 9 | |
12th | apc.Squirrel* | 164965 (5%) | 134050 | 0 | 29142 | 341 | 1421 | 12 | 0 | 6 | 6 | |
13th | apc.Legin* | 156496 (4%) | 142250 | 0 | 12810 | 82 | 1343 | 11 | 0 | 8 | 11 | |
14th | arp.Gimp 1.0* | 152729 (4%) | 134200 | 280 | 17114 | 314 | 797 | 25 | 2 | 27 | 20 | |
15th | apc.bot42* | 136208 (4%) | 96650 | 280 | 37767 | 871 | 640 | 1 | 2 | 8 | 6 |
I think part of Walls' success is due to being out of radar range at times, but mostly it's because he stays out of the fray in the middle of the field. A number of robots have run into issues with their enemy being out of radar range. If your movement code never seeks out your opponent, he could hide outside of radar range; no one appears to have tried that strategy yet. I noticed when testing my robot last night with Genesis and Diamond, Diamond shot all its energy into a wall while Genesis sat outside of radar range doing nothing. Naturally my robot was dead at that stage...
If you look at bot42, you'll notice it has unexpectedly high bullet damage for its survival. It's using a neural network to determine the best targeting strategy to use. He's implemented linear, circular, and pattern matching I believe. If he can improve his movement, I'd expect him to jump up the ranks.
Legin and Squirrel are largely unmodified sample.Crazy. Walnut and Gimp are modified sample.Walls.
Since you are all so helpful, a couple of questions...
1. Has anyone attempted to develop a targeting system based on a regression line of previous movements? It is an idea I'm considering... http://en.wikipedia.org/wiki/Polynomial_regression
2. Is there a place I can get an explanation of all the numbers in the Darkcanuck RoboRumble tables? I haven't been able to find it on the wiki and I'm not familiar with some of the numbers (such as Glicko-2, etc.)
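(For what it's worth, the idea in question 1 can be sketched in a few lines. This is just an illustration of the math in Python, not working Robocode code; the function name, the degree-2 choice, and the data are all invented for the example.)

```python
# Hypothetical sketch: fit polynomials to an enemy's recent x/y positions
# (sampled once per tick) and extrapolate them to the estimated bullet-impact
# time. Everything here is illustrative, not taken from any actual bot.
import numpy as np

def predict_position(ticks, xs, ys, impact_tick, degree=2):
    """Fit degree-n polynomials x(t), y(t) and extrapolate them."""
    px = np.polyfit(ticks, xs, degree)
    py = np.polyfit(ticks, ys, degree)
    return float(np.polyval(px, impact_tick)), float(np.polyval(py, impact_tick))

# Enemy moving with constant acceleration: x = t^2, y = 3t.
ticks = [0, 1, 2, 3, 4]
xs = [t * t for t in ticks]
ys = [3 * t for t in ticks]
x, y = predict_position(ticks, xs, ys, impact_tick=6)
# A quadratic fit recovers this motion exactly: approximately (36.0, 18.0)
```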
My guess is Rednaxlea (and maybe others) have tried something like that. A quick search turns up oldwiki:Rednaxela/MultiplePlaneRegressionClustering and Talk:Targeting Matrix, but I'm not sure either of them are really the same.
The best guns currently are nothing super fancy in terms of algorithm complexity: k-nearest neighbors to find similar situations, and kernel density among the firing angles (usually GuessFactors) recorded in those situations to choose one. (This is sometimes called Dynamic Clustering in Robocode.) Multi-variate histograms (aka Visit Count Stats) are a close second, and were the dominant strategy for a long time. Darkcanuck has an excellent neural network gun in his bots, too, which he describes a bit at Gaff/Targeting. Of course, there's lots of room for innovation and variation within any of those techniques, too.
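(To make the KNN + kernel density description above concrete, here's a toy sketch in Python. The feature choice, bandwidth, and data are invented; real guns use many more attributes and precise kernel formulas vary from bot to bot.)

```python
# Minimal sketch of the "Dynamic Clustering" recipe: find the k most similar
# recorded situations, then pick the GuessFactor with the highest Gaussian
# kernel density among them.
import math

def knn(history, query, k):
    """history: list of (features, guess_factor); returns the k nearest GFs."""
    return [gf for _, gf in sorted(history, key=lambda e: math.dist(e[0], query))[:k]]

def best_guess_factor(gfs, bandwidth=0.1, steps=201):
    """Pick the GF in [-1, 1] with the highest Gaussian kernel density."""
    def density(x):
        return sum(math.exp(-((x - g) / bandwidth) ** 2) for g in gfs)
    candidates = [-1 + 2 * i / (steps - 1) for i in range(steps)]
    return max(candidates, key=density)

# Toy log: features are (distance, lateral velocity). The situations most
# similar to the query cluster around GF 0.5; one dissimilar outlier at -0.8.
history = [((300, 8), 0.5), ((310, 8), 0.55), ((290, 7), 0.45), ((600, 0), -0.8)]
gfs = knn(history, query=(305, 8), k=3)
print(best_guess_factor(gfs))  # → 0.5
```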
I've personally played with various clustering methods, but found nothing that can top simple KNN, and everything I tried was much slower than KNN as well. I still think there's room for improvement with clustering algorithms, and probably other totally different algorithms that could compete, too. WaveSim can be pretty fun if you just want to hack away at targeting algorithms. =) RoboResearch is the tool for running massive batches of battles to test anything else.
Hmm, not sure about the RoboRumble terms being explained anywhere. APS is "Average Percent Score" - score against each bot is (your score / total score), and APS is average of all those scores. Survival is the same, but only counting survival scores. ELO and Glicko are just chess-like rating systems based on those scores. PL is pure win/loss.
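(The APS definition above boils down to a one-liner; here it is as a sketch with made-up scores, interpreting "total score" as the combined score of both bots in the pairing.)

```python
# Average Percent Score: your share of each pairing's combined score,
# averaged over all opponents. Scores are invented for illustration.
def aps(pairings):
    """pairings: list of (my_score, opponent_score) tuples."""
    return 100 * sum(m / (m + o) for m, o in pairings) / len(pairings)

print(aps([(80, 20), (50, 50), (30, 70)]))  # (80% + 50% + 30%) / 3 ≈ 53.33
```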
Thanks, Rednaxela's idea looks a lot more complicated than what I was thinking of!
Just talked to User:Rjcroy. I've been informed that regression-line fitting is likely to be highly unstable for anything other than a straight-line fit. So it's probably not worth pursuing, as linear targeting is pretty well understood. It could provide a way to get a best fit on a robot that's oscillating or something, but it's probably not going to be particularly good. Worth looking into these ideas though; who knows when the next breakthrough will come...
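(For reference, the straight-line case mentioned above is essentially classic linear targeting. A common iterative form of it, sketched here with toy coordinates rather than the Robocode API:)

```python
# Iterative linear targeting: advance the enemy along its velocity vector
# until the bullet's travel time matches the distance to that point.
import math

def linear_target_angle(my_x, my_y, ex, ey, evx, evy, bullet_speed):
    t = 0.0
    for _ in range(100):  # fixed-point iteration; converges quickly
        x, y = ex + evx * t, ey + evy * t
        t = math.hypot(x - my_x, y - my_y) / bullet_speed
    return math.atan2(x - my_x, y - my_y)  # Robocode-style bearing: 0 = north

# Enemy 300 units east, moving north at 8; bullet speed 11.
angle = linear_target_angle(0, 0, 300, 0, 0, 8, 11)  # leads the target slightly
```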
Well, I'd say regression fitting's stability would not be the really important problem with it (though yes, that would be an issue with some types of regression). The bigger problem is that even in the impossibly good best-case scenario (taking into account all variables, with practically infinite data), you would still end up averaging the movement for any given scenario. Many robots are either randomized or intentionally avoid reacting to a situation the same way they did in the past. Because of this, the average result for a given set of inputs frequently does not match the most common result.
For exactly that reason, one feature common to nearly all Robocode targeting algorithms stronger than simple pattern matching is that, rather than outputting a single firing angle, they output a histogram of how likely the enemy is to be at each of the possible angles.
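(The mean-vs-peak point above, in miniature. The data is invented: against a movement that splits between two behaviors, the mean angle points where the enemy never goes, while the histogram peak picks the more common behavior.)

```python
# Bimodal observed GuessFactors: mostly 0.8, sometimes -0.6.
from collections import Counter

observed_gfs = [0.8, 0.8, 0.8, 0.8, -0.6, -0.6, -0.6]

mean_gf = sum(observed_gfs) / len(observed_gfs)  # 0.2: the enemy is never there

def histogram_peak(gfs, bin_width=0.2):
    """Bin the GFs and return the center of the most-visited bin."""
    bins = Counter(round(g / bin_width) for g in gfs)
    return bins.most_common(1)[0][0] * bin_width

peak_gf = histogram_peak(observed_gfs)  # 0.8: the most common behavior
```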
It's interesting to hear about this competition of coworkers. Have fun! :)
Hi mate. There is a little description of the ratings at Darkcanuck/RRServer/Ratings used in the Darkcanuck server. Jdev described some of the values here: Roborumble results. Maybe it helps a little.
For me the 1200x1200 (or even bigger) fields sound very interesting; I think this could lead to nice run-and-hide tactics. If you're keen on some unusual views, you can just raise the gun cooling rate in the settings and see your robots shooting like crazy. I use this mainly for detecting energy leaks, spotting gun patterns of some enemies, or checking how well the wave surfers can avoid them. But I could as well imagine it as part of a competition of some sort.
Anyway, your results look very interesting. ShellyBot has a high score for a ram bot, and just look at the damage of Walls :). Looks like Leeroy has good roots with his high survival; if he gets some 1v1 energy management or better 1v1 skills, he could gain some % I guess. MarksRobot is well prepared for your competition: damage, survival and some ram are all good.
I wish you luck and fun and maybe you can keep us in the loop about your rumble :)
take care
Better than straight averaging/fitting would be something like RANSAC. It would find trends in the data even if there is a lot of noise.
I looked at some higher-dimensional RANSAC algorithms a while back for my master's, and it seems that RANSAC gets really slow and much less accurate as the number of dimensions goes up. So if you give it a try, keep your dimensions down =)
Hmm, sounds like a job for principal component analysis to me... I'm curious, did you ever consider trying something like that with it?
No, I never tried it with Robocode. I think using PCA has all the disadvantages of regular regression - it uses all of the noise, which is what RANSAC specifically avoids. Especially in a robocode targeting environment, where the noise is typically greater than 50%.
Hmm, good point. Interestingly, upon doing a google search for PCA and RANSAC in the same context, I found some cases of a generalized PCA being performed using a RANSAC-style algorithm.
Maybe a good approach for dimensionality reduction would be doing offline processing on some large data sets, using a RANSAC-style algorithm to find principal components to use for the online algorithm with the tighter speed requirements...
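(A rough sketch of that offline step, using plain PCA rather than the RANSAC-style variant being discussed: compute the principal components of logged situation vectors once, offline, so the online gun only has to project new situations onto the top components. The data here is synthetic.)

```python
# Plain PCA via the covariance matrix's eigenvectors.
import numpy as np

def principal_components(data, n_components):
    centered = data - data.mean(axis=0)
    # eigh returns eigenvalues in ascending order; reverse to take the
    # eigenvectors belonging to the largest eigenvalues first.
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    return vecs[:, ::-1][:, :n_components]

rng = np.random.default_rng(0)
# 3-D "situations" that really vary along one direction (1, 2, 0.5) plus noise.
t = rng.normal(size=500)
data = np.outer(t, [1.0, 2.0, 0.5]) + rng.normal(scale=0.05, size=(500, 3))
pc = principal_components(data, 1)[:, 0]
# The first component comes out parallel to the true direction (up to sign).
```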
I'm looking forward to seeing where this idea might go. Are you still thinking of it in a robocode context? I was thinking, even if you correctly identified several movement patterns, would it have any advantages over a pattern matcher?
My thoughts with this actually went more toward a process-after-extracting-cluster type algorithm for KNN, in order to accurately interpolate what value to shoot at given a set of adjacent-in-n-space values. I think it would be much better than weighting scans by 1/distance or whatever other weighting scheme gets used, because the noise could be eliminated based on location instead of distance, and only the trends would be chosen, much like how a histogram lets you select the highest peak rather than just taking the mean of all the scans. I think it would require fairly large clusters (200 or so points at least), but it could net fairly large gains against the right data patterns.
That way of using it is exactly what I was proposing years ago in the last major paragraph of oldwiki:Rednaxela/MultiplePlaneRegressionClustering, though I think you word it much more elegantly than I did :-)
("To use this data of the lines, you simply take every data point, shift its GuessFactor by however much the formula for the plane it is clustered with says it should be shifted, and then use a kernel density algorithm on these shifted values....")
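(A toy illustration of that quoted idea, reduced to one attribute and a fitted line instead of a plane: remove the cluster's linear trend from each GuessFactor, and the shifted values pile up where a kernel density estimate can find them. All names and data are invented.)

```python
# Detrend GuessFactors within a cluster via a least-squares line fit.
def detrend_gfs(cluster):
    """cluster: list of (attribute, guess_factor); returns trend-removed GFs."""
    n = len(cluster)
    mean_a = sum(a for a, _ in cluster) / n
    mean_g = sum(g for _, g in cluster) / n
    slope = (sum((a - mean_a) * (g - mean_g) for a, g in cluster)
             / sum((a - mean_a) ** 2 for a, _ in cluster))
    # Shift every GF to what it would be at the cluster's mean attribute value.
    return [g - slope * (a - mean_a) for a, g in cluster]

# GF rises linearly with lateral velocity: the raw GFs are spread out, but
# the detrended GFs collapse to a single value.
cluster = [(2, 0.2), (4, 0.4), (6, 0.6), (8, 0.8)]
shifted = detrend_gfs(cluster)  # all ≈ 0.5
```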