what's the secret to making a good robot in robocode
In this conversation, users discuss the secrets to making a good robot in Robocode. Various users contribute their thoughts, insights, and advice on the topic. Key points include:
- Understanding the rules of the game, Java technology, and competition mechanics.
- Learning from other robocoders and studying existing strategies.
- Hard work, passion, attention to detail, and not trying to reinvent the wheel.
- Implementing known best practices, fixing bugs, and continuous tweaking and tuning.
- Making assumptions about enemy behavior can be a double-edged sword, as some assumptions can be exploited.
- GuessFactors, a common targeting method, make assumptions about enemy movement, but their effectiveness depends on various factors.
Overall, the users suggest learning from others, following best practices, and carefully considering assumptions when developing a competitive Robocode robot.
what is it please tell me
The secret is to copy other people's code, since clearly there is no other way to make anything that competes.
- Understanding of the rules of the game, which means the Robocode physics and scoring system.
- Understanding of Java technology, including deep knowledge of the Robocode API.
- Understanding of competition mechanics, like game theory.
Looking at what other robocoders are doing helps a lot. Most strategies are a combination of the 3 elements above, but starting from where others are now is more efficient than trying to reinvent the wheel and figuring out everything by yourself.
How do you know where others are now? Browse this wiki, there is A LOT of information here. Reverse engineering code from the top bots also helps, but only after you are familiar with most of the concepts.
To quote Skilgannon (slightly out of context):
"I'm good at producing a top scoring bot, I'm not arguing that, but I'm better at optimising code so it runs quickly/small and then just using a feature 10x more than anybody else ever has before. Examples are the 100+ buffers in DrussGT, surf-absolutely-everybody in Neuromancer, multiple-choice pattern matching in Toorkild. We'd already had multiple buffers, melee surfing, micro pattern matching, non-micro multiple-choice pattern matching, so nothing I did was really new, I just squeezed every last drop of performance out that it had to give. I guess my real claim to fame is variable-distance Stop-And-Go, the rest is my interpretation of the wealth of knowledge already available on the wiki =) I guess the moral of the story is to think big!"
There is a lot more to DrussGT than simply a bunch of buffers.
I think multiple buffers are the weakest part of DrussGT. Replacing the bins with k-NN search in movement would make it even stronger.
State of the art go-to surfing, anti-pattern matching, flattener, surf 2 waves, wall smoothing, super strong dynamic clustering gun, data decay, anti-surfer gun, genetic tuning, precise MEA, gun heat waves, super survivalist energy management, bullet shadows...
That quote seems like an example of Survivorship Bias to me: http://en.wikipedia.org/wiki/Survivorship_bias
Just because Skilgannon is the champ doesn't mean his memory of the approach he took is the best approach (or even above average).
Kicking myself for missing the chance to link to a much cooler post about Survivorship Bias. :-)
Hi! I think that Tomcat is a pretty good robot, so I can publish my humble opinion :)
First of all, Skilgannon, imho, is the best robocoder ever. I remember how easily he retook first place in the PL when Tomcat tried to get it, and it's impossible for me to understand :) He's just from another planet and cannot be understood by regular humans :) So don't try to beat him until you beat the other regular humans :)
The secret of the other top bots is pretty simple: passion, hard work and attention to detail. And don't try to reinvent the wheel until you beat the other regular humans :) With Tomcat (initially Jdev, later UltraMarine, later Primarch) I spent two years trying to reinvent the wheel and failed at it. Then in two months I just implemented all the known best practices and took 50th place, and after four or five months I took 3rd place. If you look at Tomcat's history, you will find that most of the changes were small tweaks and bugfixes.
I can give a few pieces of technical advice:
- Start by implementing all known best practices
- Fix all bugs in the implementation
- Watch your bot in battle - it is the only way to catch many types of bugs
- Visualize your bot's data (see the sketch after this list)
- Move in small steps
- Use a strong build and test process with VCS, automatic versioning, batch battle execution, etc., so you can say "Bot of version V from revision R with changes Cs has the following results: RES"
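For the visualization point above, Robocode's built-in debug painting is the usual tool. A minimal sketch, assuming you just want to draw a predicted enemy position (the class name and fields are illustrative placeholders for your own data; enable the "Paint" button for the robot in the battle window to see it):

```java
import java.awt.Color;
import java.awt.Graphics2D;
import robocode.AdvancedRobot;

public class PaintingExample extends AdvancedRobot {
    // Illustrative placeholder: wherever your gun currently thinks the enemy will be.
    private double predictedX, predictedY;

    public void onPaint(Graphics2D g) {
        g.setColor(Color.RED);
        // Circle around the predicted enemy position.
        g.drawOval((int) predictedX - 10, (int) predictedY - 10, 20, 20);
        // Line from our current position to the prediction.
        g.drawLine((int) getX(), (int) getY(), (int) predictedX, (int) predictedY);
    }
}
```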
And about reinventing the wheel: I think that Tomcat is a little reinventer, and he is still a top bot because, with his PIF gun and a pretty unique movement-log system, he is very strong against weak bots and pretty unpredictable for strong bots. But I'm not a master of Robocode, so I may be wrong here.
So be passionate, work hard, and you will find success :) It pleases me that the Robocode community is still active and new robocoders still appear here.
Good luck!
P.S.: as usual, sorry for my English
Hey, there's no secret at all: Make It Work (implement best practices), Make It Right (fix bugs), Make It Fast (tweak & tune) :)
First, thanks to all for the praise I've received on the thread =) I don't think I'm superhuman, but I know that I do approach problems from a different perspective than most people.
Something I don't think I've discussed much is the process I go through when I make gains. These typically come from three things:
- Adding a new behaviour that has already been shown to improve performance, or creating a new behaviour that might improve performance and then testing it to death over lots of battles to see if it actually helped.
- Fixing bugs. Enough said.
- Speeding up code to make skipped-turn behaviour more predictable and to add spare CPU/memory capacity for future features.
So absolutely, I agree with what JDev has said here: make it work, make it right, make it fast.
One other thing that I have to recommend is to not make ANY assumptions about enemy or bot behaviour. If you do make an assumption, do a bunch of tests and check that the data supports your assumption. And once your feature is implemented, see if there is any way you can get rid of the assumption and thus take advantage of the cases where it isn't true.
About making assumptions of opponents, it can be done, but not in the way many people do it. That's where game theory kicks in.
The best assumption you can make is that opponents are also trying to outperform you. You are not the only one trying to climb the ranking. That said, any assumption you make might be used against you. The higher the ranking of a competitor, the stronger this statement becomes.
Most attempts at exploiting opponents' weaknesses also open yourself to weaknesses. Skilgannon's approach of not making assumptions is all about not exposing yourself to weaknesses.
A good example is most bots trying to crush anyone behaving like SittingDuck. The best targeting against SittingDuck is Head-On Targeting. But Head-On Targeting is shamefully predictable. Many bots still do it. Look what happened when EnergyDome exploited that.
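For reference, Head-On Targeting is just aiming at the enemy's current position and firing, which is exactly why it is so predictable. A minimal sketch (the class name is illustrative):

```java
import robocode.AdvancedRobot;
import robocode.ScannedRobotEvent;
import robocode.util.Utils;

public class HeadOnExample extends AdvancedRobot {
    public void run() {
        while (true) {
            turnRadarRightRadians(Double.POSITIVE_INFINITY); // keep scanning
        }
    }

    public void onScannedRobot(ScannedRobotEvent e) {
        // Absolute bearing to the enemy = our heading + the relative bearing from the scan.
        double absoluteBearing = getHeadingRadians() + e.getBearingRadians();
        // Point the gun straight at where the enemy is right now and fire.
        setTurnGunRightRadians(
                Utils.normalRelativeAngle(absoluteBearing - getGunHeadingRadians()));
        setFire(2);
    }
}
```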
High ranking bots make all sorts of assumptions, really. GuessFactors, which are so commonly used as a way to predict the opponent's movement, contain some (loose) assumptions that the opponent's movement is at least vaguely related to where you fired from. Most targeting systems also make the assumption that the opponent acts symmetrically when going backwards versus forwards. Those two assumptions can be quite easily broken, but they're not easily exploitable, because breaking them would not make you more unpredictable to those using the incorrect assumptions; it would merely make you more predictable to those who *don't* make the assumptions.
One just has to distinguish between assumptions which are sufficiently safe and those which are not.
I actually don't feel like GFs make any major assumptions besides clockwise vs counter-clockwise being treated the same. And it would take some actual effort to make that assumption false. The firing angle you use is relative to where you are firing from, regardless of how the enemy moves. And that's the output you need from a targeting algorithm, not the exact location of an enemy.
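For concreteness, here is roughly what a wave collects and how a GuessFactor turns back into a firing angle; a minimal sketch, with class and field names that are illustrative rather than taken from any particular bot:

```java
import java.awt.geom.Point2D;
import robocode.util.Utils;

class Wave {
    Point2D.Double fireLocation; // where we were when the wave (bullet) was fired
    double fireBearing;          // absolute angle from us to the enemy at fire time
    double bulletVelocity;       // 20 - 3 * bulletPower
    int lateralDirection;        // +1/-1: sign of the enemy's lateral velocity at fire time

    // Called once the wave has travelled far enough to reach the enemy:
    // the enemy's position only enters as an angle relative to where we fired from.
    double guessFactor(Point2D.Double enemyLocationNow) {
        double hitBearing = Math.atan2(enemyLocationNow.x - fireLocation.x,
                                       enemyLocationNow.y - fireLocation.y);
        double offset = Utils.normalRelativeAngle(hitBearing - fireBearing);
        double maxEscapeAngle = Math.asin(8.0 / bulletVelocity);
        return Math.max(-1, Math.min(1, lateralDirection * offset / maxEscapeAngle));
    }

    // Turning a chosen GuessFactor back into the firing angle the gun needs.
    double firingAngle(double chosenGuessFactor) {
        return fireBearing
                + lateralDirection * chosenGuessFactor * Math.asin(8.0 / bulletVelocity);
    }
}
```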
GF assumes symmetric movement, which is safe. Also assumes orbital movement, which is not as safe, but still hard to exploit.
GF combined with statistical targeting, also assumes non-adaptable movement, which is not safe at all. Exploited by all surfers.
How does it assume orbital movement? If you're not staying perpendicular to me, my GF gun is going to crush you even harder.
In practice, yeah, because the whole point of perpendicularity is to maximize your MEA. But, GF guns are actually assuming that the enemy is orbiting when it visits those GFs. For example, SittingDuck and RamFire both stay at GF0, even though they have very different movements. I'm definitely not saying that this assumption would be easy to exploit, especially with segmentation, but it could be, in theory.
I still disagree that GuessFactors introduce that assumption. A GF is a scalar value representing the whole range of firing angles the enemy could reach. Orbital movement maximizes this range and distributes the values most evenly across it.
How about this: What if I used raw bearing offsets instead of GuessFactors? Do they assume orbital movement?
The way I think of it is... besides the clockwise/counterclockwise thing, the other assumption that GuessFactors make is that the movement profile either stretches with BFT (or more directly MEA) or that BFT is rather constant.
Raw bearing offsets are similar except that they don't apply such a stretching, and instead assume that either BFT is rather constant, or... bots are breaking the physics rules... no that doesn't count... they just rigidly assume BFT is rather constant.
The main innovation of GuessFactors over raw bearing offsets is that it generalized what was assumed as to be less rigid, but it still makes some assumptions. Further, things such as segmentation by BFT or MEA tend to mitigate the effect of this specific assumption.
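To make the stretching concrete, here is a minimal sketch contrasting the two; the asin(8/Vb) formula is the usual MEA approximation, and the method names are illustrative:

```java
// Approximate maximum escape angle: max robot speed is 8, bullet speed is 20 - 3 * power.
static double maxEscapeAngle(double bulletPower) {
    return Math.asin(8.0 / (20 - 3 * bulletPower));
}

// GuessFactor: the stored value is re-scaled by the MEA of the bullet being fired now,
// so the recorded profile "stretches" with bullet flight time.
static double offsetFromGuessFactor(double storedGF, int lateralDirection, double powerNow) {
    return lateralDirection * storedGF * maxEscapeAngle(powerNow);
}

// Raw bearing offset: reused unchanged, which implicitly assumes the bullet flight time
// (and thus the reachable angular range) is roughly constant.
static double offsetFromRawOffset(double storedOffset, int lateralDirection) {
    return lateralDirection * storedOffset;
}
```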
Basically, I don't think GuessFactors assume anything (besides symmetry) that isn't already true based on the physics of Robocode. Your input to firing a shot at your already chosen bullet power is an absolute angle (modulo your current gun heading). A wave collects what should have been the input to that API call. To me that seems about as close to "making no assumptions" as you can get.
So I'm forced to ask myself: why don't you use GFs in Melee? Doesn't that prove they assume an orbital movement? But I really think it's more that you can assume non-orbital movement in Melee, so it's advantageous to do so (PIF, displacement vectors).
I think of "assume non-orbital movement in Melee" and "assume orbital movement in 1v1" as two sides of the same coin. Either way it's an assumption. You could go somewhere in the middle by having both GF-based and PIF-based (and displacement-vector-based) methods and picking the one that seems to best characterize the current enemy, but even then you still make an assumption: That the assumption of a fixed method is suboptimal often enough that it's worth the penalty of sometimes picking the wrong one ;)
In practice, they are two sides of the same coin, but I still disagree. Only the "assume non-orbital movement" involves extrapolating enemy movements (relative to anything) and translating to and from firing angles. Waves collect very close to the exact value that the Robocode API takes as input to decide if you hit an enemy. That just doesn't feel like adding any assumption that isn't already introduced by Robocode itself.
But on further thought, it's also no excuse to ignore an assumption made by the API. As in, coming up with a model that eliminates an assumption made by the Robocode API would (edit: well, could...) be an improvement to a targeting algorithm. And I'll concede that the Robocode API taking a firing angle as input introduces an assumption of lateral displacement scaling by bullet flight time.
It seems to me that assuming lateral symmetry could be exploited by an opponent. Imagine a bot which starts moving clockwise and goes to GF 0.5. For the next wave it is moving counterclockwise, so a gun assuming symmetry will fire ahead of it at GF 0.5 relative to its orbit direction. But the bot is not choosing its GF relative to orbit direction; it chooses absolutely in terms of clockwise and counterclockwise, so it moves again to absolute GF 0.5. It continues to always move to absolute GF 0.5, but predicts enemy firing times and arranges to alternate orbiting clockwise and counterclockwise when they fire. This simple version would probably be much more effective against guns with fast data decay.
Hmm. But if you're already tracking stats from the enemy's perspective, isn't it more effective to just use the same model as they are and avoid the dangerous spots? I don't need any symmetry tricks to know the enemy will shoot at GF 0.5 (from his perspective) and not go there.
I'm curious what MN and Rednaxela have to say. :-) I'm thinking it might be exploitable by a non-surfing movement, but for a surfing movement you are sacrificing more than you're gaining.
I'm not saying my system is a particularly good one; it just shows that using relative GFs should technically be exploitable. What you are saying, knowing the enemy will shoot at 0.5, is basically using a flattener.
I don't know, I think to me the definition of "exploiting" means you can use it to improve your system. Tweaking a random movement to have a non-symmetric profile could be an improvement. With surfing, your goal is to model the enemy's targeting data model, which is done correctly by assuming symmetrical GFs. It seems to me a surfing movement is already aware of the GF symmetry and taking it into account as best it can.
From the perspective of normal (relative to orbit direction) GFs, what it comes down to is that you'd have a movement which alternates between GF 0.5 and GF -0.5.
It's true like you say that this could have some level of effectiveness against targeting systems that are using firing waves only AND are missing certain segmentation dimensions...
I would say it could be 'exploiting' yes, but I would also guess there are very few targeting systems you'd reliably trick with this. It seems extremely fragile. Your maneuvers to "arrange to alternate orbiting ..." would give away which of GF 0.5 and GF -0.5 you're heading to in certain targeting segments/dimensions (easy near-100% hit rate against it), and even without that, asymmetry in the result of non-firing waves would cause a lean toward one or the other, causing an easy ~50% hit rate.
It would be interesting to see a demonstration of what in practice would be tricked by it though.
I'll give you that it assumes a movement profile that scales with bullet flight time. But that's a very light assumption and not the same as assuming orbital movement. What types of movement don't scale with BFT? Just fixed patterns?
PS: Mathematically, I think bearing offsets would assume bullet power is constant, not BFT, right? ;)
No, because bearing offsets have no relationship to MEA. I think that raw MEA is the source of the orbit assumption in most guns. The raw MEA is assuming that the enemy can reach all those possible future locations by the time the wave hits, which it can't if it isn't orbiting. Precise MEA pretty much solves this.
Why is that an assumption that they will ever go there? It's only taking into account that they could. If they never go there, what's the difference? If your bot has a min GF of -0.1 and a max of 0.3, it wouldn't make my gun any more accurate to change my MEA to match.
If an enemy bot never went outside of the range between -0.1 and 0.3, you're right, it probably wouldn't affect accuracy much. However, say an enemy bot has a huge spike at 1.0, and you decide to fire at it, even though the enemy cannot possibly reach 1.0 with its current heading. The shot would be wasted.
So, yeah, I'm always thinking in terms of precise MEA, and thinking that I will never fire at a GF the enemy never visited. So I don't see how it's assuming anything about how much of the escape range they might actually cover - the MEA is effectively however much of the range they ever cover. There is the scaling, which I think only assumes they're not moving in a fixed pattern.
And you're right that that is a problem with a certain MEA approximation formula some people use with their GuessFactors. :-)
GFs adjust angles based on bullet flight time. If it saw GF 1.0 previously with a low-powered bullet, then when shooting with a high-powered bullet, GF assumes the target will continue moving until it reaches the adjusted GF 1.0. This holds true only with orbital movement, or linear movement if close to walls.
If it were a pattern movement, like MyFirstRobot or SpinBot, the assumption would be wrong. But since pattern movement is clearly weaker than orbital movement, it is quite safe to assume opponents are orbiting. Segmenting by bullet power, distance and time-since-direction-change fixes the lack of accuracy in the long run.
But if you are trying to maximize APS by crushing weak movements, then PIF might be better in some situations.
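For reference, a minimal sketch of the kind of segmented visit-count buffer being described (only two of the mentioned dimensions, for brevity; the segment boundaries and bin count are illustrative choices, not anyone's tuned values):

```java
// stats[distanceSegment][powerSegment][gfBin] counts how often the enemy
// ended up at that GuessFactor under those conditions.
static final int DISTANCE_SEGMENTS = 5;
static final int POWER_SEGMENTS = 3;
static final int GF_BINS = 31;
static final double[][][] stats = new double[DISTANCE_SEGMENTS][POWER_SEGMENTS][GF_BINS];

static int distanceSegment(double distance) {
    return Math.min(DISTANCE_SEGMENTS - 1, (int) (distance / 200));
}

static int powerSegment(double bulletPower) {
    return Math.min(POWER_SEGMENTS - 1, (int) bulletPower);
}

// When a wave breaks, increment the bin the enemy actually visited.
static void registerVisit(double distance, double bulletPower, double guessFactor) {
    int bin = (int) Math.round((guessFactor + 1) / 2 * (GF_BINS - 1));
    stats[distanceSegment(distance)][powerSegment(bulletPower)][bin]++;
}

// When aiming, pick the most visited bin in the matching segment and
// convert it back into a GuessFactor.
static double bestGuessFactor(double distance, double bulletPower) {
    double[] bins = stats[distanceSegment(distance)][powerSegment(bulletPower)];
    int best = GF_BINS / 2; // default to GF 0 (head-on) when there is no data yet
    for (int i = 0; i < GF_BINS; i++) {
        if (bins[i] > bins[best]) {
            best = i;
        }
    }
    return (best * 2.0 / (GF_BINS - 1)) - 1;
}
```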
If the enemy pays no attention to where I am and just moves randomly, the lateral displacement will still scale by BFT and GFs will still make sense, unless their algorithm is hard-coded to only move a fixed distance. That movement isn't orbital. Am I wrong?
Ok, any movement which scales with BFT works with GF. But which movements are these?
- Orbital movement
- Random movement normalized to move to all reachable angles (random orbital movement)
It doesn't work as well on other movements (slower learning):
- Pattern movement (including linear movement and oscillators)
- Random movements, which change heading/velocity based on anything not related to MEA
In practice, GF works against all the movements above, but that's thanks to segmentation. Segmenting by distance and bullet power fixes most of the assumptions in GF which happen to be wrong in the long run.
Assuming wrongly that a movement is orbital is not that bad. Movements which do not try to maximize MEA are more predictable in the long run. If the movement is orbital, GF is right from the beginning and does well. If it is not orbital, then any statistical gun will have a higher hit rate in the long run.
And a worst-case scenario: what if someone uses a pattern movement in the beginning and switches to orbital movement later on? DrussGT has been doing this since it incorporated EnergyDome's bullet shielding.