Randomized surfing
Specialized guns can be complex and costly (in terms of development and testing). But it's true that if you had a copy of the enemy robot's movement code and fed it exactly the same data it accumulates during battle, you would be able to predict its movement perfectly, provided it doesn't use any kind of random mechanism.
But with even a slight error, the difference will grow exponentially, since Robocode robots are essentially very complex systems to which chaos theory definitely applies.
I, at least, try to minimize the effects of noise, especially when dealing with floating-point arithmetic. So chaos theory doesn't apply to my bots, and I guess most bots follow the same path.
And, unless someone is trying to flatten virtual waves, it is possible to have perfect information about most deterministic movement systems. Probably all surfers and all active flatteners.
It would be interesting if someone tried to create some kind of "BigBlackBook" bot.
[EDIT: Actually, chaos theory does apply, but additional information gathered from scans during the battle can compensate for it and still make specialist guns possible.]
Actually, you can't perfectly predict enemy movement even if you have their exact stats and prediction setup, because you see their movements a tick later than they do. It is probably enough to thrash them pretty thoroughly, but it won't be perfect.
My take on why random surfing hasn't worked out well is because of the same reasons that make a classic Raiko-style random movement not work out as well as surfing: where you currently are affects which locations are reachable. Flattening takes this into account, particularly with 2-wave surfing, where you can plan to get to those far-out GFs. Using purely random GF selection will mostly result in GFs you can't reach, or, if you try to scale it to where you can reach, it will end up being susceptible to statistical analysis from overlaying lots of movements in a classic anti-random-movement style.
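To make the "which locations are reachable" point concrete, here is a minimal stand-alone sketch (not taken from any real bot) that estimates the window of lateral displacement available before a wave arrives, assuming simplified 1-D motion with Robocode's 8 px/tick speed cap and 1/2 px/tick² acceleration/braking rates. Dividing this window by the wave's maximum escape distance gives the reachable GF range; note how lopsided it is when the bot is already moving at full speed one way.

```java
import static java.lang.Math.*;

// Stand-alone sketch (not from any real bot): estimate the window of lateral
// displacement reachable before a wave hits, using a simplified version of
// Robocode's velocity rules (max speed 8, accelerate at 1, brake at 2).
public class ReachableWindow {
    static final double MAX_SPEED = 8, ACCEL = 1, DECEL = 2;

    // Net displacement after driving as hard as possible in direction dir
    // (+1 or -1) for `ticks` ticks, starting from signed velocity v.
    static double maxDisplacement(double v, int ticks, int dir) {
        double dist = 0;
        for (int t = 0; t < ticks; t++) {
            double rate = (v == 0 || signum(v) == dir) ? ACCEL : DECEL;
            v = max(-MAX_SPEED, min(MAX_SPEED, v + rate * dir));
            dist += v;
        }
        return dist;
    }

    public static void main(String[] args) {
        double v = 8;    // already at full speed in the "positive" orbit direction
        int ticks = 20;  // wave arrives in 20 ticks
        System.out.printf("reachable displacement: [%.0f, %.0f]%n",
                maxDisplacement(v, ticks, -1), maxDisplacement(v, ticks, +1));
        // Prints roughly [-88, 160]: the window is heavily skewed toward
        // the direction the bot is already travelling.
    }
}
```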
My addendum on why random surfing usually hasn't worked out so well also has the following point: in the cases where you scale your random range to what you can reach, there's an additional problem, which is that, to my knowledge, almost all such attempts have used a simple goto method or orbit to get to the next selected GF, which potentially gives away information about where they were trying to get for that wave earlier than it strictly had to. I suspect that if one adds some smart randomization to the timing/velocity/heading of the path used to get to the chosen GF, the result will be blurred as far as the usual statistical analysis is concerned.
Edit for more specific example: If one is using a traditional "orbit until selected GF" method, for instance, as soon as you stop or begin to decelerate, you've given away exactly where you intend to be for that wave. How early that signal is given away varies depending on how far that GF is from where you ended up on the prior wave, but either way it adds clarity to the statistics of where you'll be on future waves. The idea is to add uncertainty to that signal, instead of giving them absolute certainty about where you'll be in a few ticks.
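As a purely hypothetical sketch of blurring that signal (the class, hook, and field names below are invented, not from any existing bot): instead of driving flat-out to the chosen point and braking only at the end, re-roll a speed cap per enemy wave and insert a short random stall somewhere along the path, so a stop or deceleration no longer pinpoints the final destination.

```java
import java.util.Random;
import robocode.AdvancedRobot;

// Purely hypothetical sketch: blur the "I'm decelerating, so this must be my
// final spot" signal. The wave bookkeeping (choosing the goto point, tracking
// the remaining distance, calling driveTo each tick) is assumed to exist.
public class BlurredGoto extends AdvancedRobot {
    private final Random rnd = new Random();
    private double speedCap;   // re-rolled once per enemy wave
    private int stallTicks;    // short fake pause somewhere along the path

    void onNewEnemyWave() {                  // hypothetical hook
        speedCap = 5 + 3 * rnd.nextDouble(); // never reveal full-speed timing
        stallTicks = rnd.nextInt(3);         // 0-2 ticks of mid-path stalling
    }

    void driveTo(double distanceRemaining) { // called every tick while moving
        if (stallTicks > 0 && Math.abs(distanceRemaining) > 30) {
            setMaxVelocity(0);               // brief stop that is NOT the destination
            stallTicks--;
        } else {
            setMaxVelocity(speedCap);
            setAhead(distanceRemaining);
        }
    }
}
```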
Edit: Sorry, you didn't mean "traditional" surfing, you meant "traditional go-to random GF". :-) But anyway, my latest attempt at random surfing actually randomized the stats that my normal surfing algorithm works from, so it shouldn't have been subject to any tells re: what path I take to get to the "target/random GF".
I have been thinking about random movements, but I was not able to devise an algorithm whose displacement grows linearly with time. Every time I put in, roughly speaking, a choice of stop, back, or forward, I see the bot perform diffusive/Brownian-type motion, which, as the math teaches, has displacement growing as the square root of time. So the simplest of all guns, head-on, has the highest chance of success. On the other hand, it is very unlikely that such a bot will reach the ends of the guess factor range, which yet again makes it an easy target due to its limited range in the GF metric.
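A quick stand-alone check of that square-root-of-time behaviour, independent of Robocode: a walker that picks back/stop/forward uniformly at random each tick has a per-step variance of 2/3, so its RMS displacement after t ticks should be about sqrt(2t/3), which the little simulation below confirms.

```java
import java.util.Random;

// Quick check of the sqrt(t) claim: a walker picking back/stop/forward
// uniformly each tick has per-step variance 2/3, so its RMS displacement
// after t ticks should be about sqrt(2t/3).
public class RandomWalkRms {
    public static void main(String[] args) {
        Random rnd = new Random();
        int trials = 100_000;
        for (int t : new int[] {25, 100, 400}) {
            double sumSq = 0;
            for (int i = 0; i < trials; i++) {
                int x = 0;
                for (int k = 0; k < t; k++) {
                    x += rnd.nextInt(3) - 1;   // -1, 0 or +1 step
                }
                sumSq += (double) x * x;
            }
            System.out.printf("t=%3d  measured rms=%.1f  sqrt(2t/3)=%.1f%n",
                    t, Math.sqrt(sumSq / trials), Math.sqrt(2.0 * t / 3));
        }
    }
}
```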
My current bot has such randomness built in, affected by the danger map, yet it still likes to hang in one place, which makes it an easy target for GF guns (I think) and maybe even DC guns.
Skipping a turn or being a tick late may make a difference, since an enemy can move up to 8 pixels in that missed tick. Now I'm not so sure of this, but missing that tick could make your bot's recorded (saved) stats off from the actual ones. But again, it's only 8 pixels, so it won't be off that much. The stats would be near-perfect, but not absolutely perfect. Again, I'm pretty new to Robocode, so I'm not so sure.
Also, I may never release Phantom. The movement sounds a lot more complex than I thought. I'm no expert at Robocode (meaning I don't have bots like DrussGT, Diamond, Combat, Nene, RougeDC, etc.).
If you make a specialist anti-surfer gun, you could PIT many ticks ahead and still hit with near 100% accuracy.
My theory about why random movements perform worse than deterministic movements is that no one in the rumble is attempting to exploit the determinism.
Game theory model (already posted somewhere in the wiki):
Suppose there were only 3 GFs to shoot at, forward, center and backward. Suppose there were only 3 GFs to dodge to, forward, center and backward.
If both shooter and dodger choose the same GF, it is a hit and the shooter wins, otherwise it is a miss and the dodger wins. If you shoot at the GF with highest occurrences, which is what all statistical guns do, then the best response for the dodger is to move where there are the least occurrences, which is what all active flatteners do.
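In code terms, that pair of best responses is just an argmax against an argmin over the same kind of visit-count array. A tiny illustration (the array and method names are made up, not from any particular gun or movement):

```java
// Illustrative only: the statistical gun fires at the most-visited GF bin,
// while the active flattener moves toward the bin where it has been seen
// the least.
class GfBins {
    static int gunTargetBin(int[] enemyVisitCounts) {
        int best = 0;
        for (int i = 1; i < enemyVisitCounts.length; i++) {
            if (enemyVisitCounts[i] > enemyVisitCounts[best]) best = i;
        }
        return best;   // shoot where the dodger has been most often
    }

    static int flattenerDodgeBin(int[] selfVisitCounts) {
        int best = 0;
        for (int i = 1; i < selfVisitCounts.length; i++) {
            if (selfVisitCounts[i] < selfVisitCounts[best]) best = i;
        }
        return best;   // move where we have been least often
    }
}
```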
Suppose the flattener moves in a pattern like this: forward, backward, center, forward, backward, center... GF guns with data decay will shoot anywhere the 1st time, then forward the 2nd time, backward, center, forward, backward, center, and always miss.
If the flattener instead used a random strategy and moved forward, backward and center each with 1/3 probability, then the shooter would have a 33% hit rate, which is higher than always missing against the deterministic movement. In this scenario random movement looks weaker than deterministic movement.
But if, instead of using past data, the shooter hard-coded the pattern and shot forward the 1st time, then backward the 2nd time, then center, forward... it would always hit. Deterministic movement would give a 100% hit rate while random movement would give a 33% hit rate. In this more evolved scenario, random movement is stronger than deterministic movement.
The hard-coding counter-strategy can also be used by the dodger, which hard-codes the shooter's pattern and instead moves backward the 1st time, center the 2nd time, forward the 3rd time... and the hard-coded shooter will always miss, again.
This hard-coding of opponent patterns creates a cycle, which is only broken if both shooter and dodger opt for random strategies instead: random movement and random targeting. In this simple game it is a 1/3 chance of shooting at each GF and a 1/3 chance of dodging to each GF. This way, even if you reverse engineer the opponent, there is no room for improvement.
Robocode is more complex than the 3-GF shooter/dodger game, and I guess the most conservative (counter-proof) strategy would be weighted random movement and weighted random targeting which take into account multiple waves and walls.
It is also interesting to note that if you shoot with a 1/3 chance at each GF in the simple game, you would achieve only a 33% hit rate against SittingDuck. Counter-proof strategies don't exploit opponents' weaknesses. And strategies which exploit weaknesses also make themselves vulnerable to exploits.
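The three hit rates quoted in this model are easy to verify with a toy simulation of the 3-GF game (this is the abstract game only, not Robocode): the hard-coded shooter scores 100% against the cycling dodger, any fixed shooter scores about 33% against the uniformly random dodger, and the uniformly random shooter scores only about 33% even against SittingDuck.

```java
import java.util.Random;

// Toy model of the 3-GF shooter/dodger game above. GFs are 0, 1, 2; a round
// is a hit when shooter and dodger pick the same GF.
public class ThreeGfGame {
    static final int ROUNDS = 30_000;

    public static void main(String[] args) {
        Random rnd = new Random();

        // 1) Shooter has hard-coded the cycling dodger's pattern: always hits.
        int hits = 0;
        for (int i = 0; i < ROUNDS; i++) {
            int dodge = i % 3;          // forward, backward, center, repeat
            int shot = i % 3;           // replay the known pattern
            if (shot == dodge) hits++;
        }
        System.out.println("hard-coded shooter vs cycling dodger: " + pct(hits));

        // 2) Any fixed shooter (say head-on, GF 1) vs a uniformly random dodger.
        hits = 0;
        for (int i = 0; i < ROUNDS; i++) {
            if (rnd.nextInt(3) == 1) hits++;
        }
        System.out.println("fixed shooter vs random dodger:       " + pct(hits));

        // 3) Uniformly random shooter vs SittingDuck (always at GF 1).
        hits = 0;
        for (int i = 0; i < ROUNDS; i++) {
            if (rnd.nextInt(3) == 1) hits++;
        }
        System.out.println("random shooter vs SittingDuck:        " + pct(hits));
    }

    static String pct(int hits) {
        return String.format("~%.0f%% hit rate", 100.0 * hits / ROUNDS);
    }
}
```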
There's a component you're missing, and that is the limited computational power available. By design, DrussGT doesn't know its movements far enough beforehand to be able to predict what it will be doing as the currently fired wave hits. Not only that, but anybody attempting to predict DrussGT's movements would have to predict many ticks of movement forward *every single tick*, and there isn't enough computational time to do that because just predicting DrussGT for a single tick already takes up a significant amount of the processing quota.
Perhaps for more lightweight bots I could understand how this could happen, but any multi-wave surfer can't be predicted like this because of processing-time constraints.
You don't need to do this every single tick, only about every 15 ticks or so, and you can spread the processing over many ticks. I didn't say it would be easy, but it is possible.
By the way, how does DrussGT calculate danger if it doesn't know where it will be when waves hit?
It knows where it will be, but it re-calculates dangers and precise predictions every time the enemy moves more than 10% of enemyDistance. It also predicts enemy movement for use in the danger calculation, so the danger calculation changes based on the enemy's movement. So you would have to re-calculate DrussGT's precise predictions in your simulation many times if you moved.
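A rough paraphrase of that trigger, in case it helps picture it (this is not DrussGT's actual code, and the helper names are invented): cached dangers are reused until the enemy has drifted more than 10% of the current enemy distance since the last full recalculation.

```java
import java.awt.geom.Point2D;

// Rough paraphrase of the trigger described above, not DrussGT's actual code.
// Cached surf dangers are reused until the enemy has drifted more than 10%
// of the current enemy distance from where they were at the last recalculation.
class DangerCache {
    private Point2D.Double lastRecalcEnemyPos;

    void onEnemyScan(Point2D.Double enemyPos, double enemyDistance) {
        if (lastRecalcEnemyPos == null
                || lastRecalcEnemyPos.distance(enemyPos) > 0.1 * enemyDistance) {
            recalculateDangers();   // expensive precise-prediction pass
            lastRecalcEnemyPos = new Point2D.Double(enemyPos.x, enemyPos.y);
        }
    }

    void recalculateDangers() {
        // placeholder: re-run precise prediction and danger evaluation here
    }
}
```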
I'm thinking of incorporating a random gun and random movement just to future-proof DrussGT, although there are more fun things to do in the meantime =)
@MN: You could potentially still win with 33%. That hit rate is fairly high against top bots such as DrussGT or Diamond (in my experience). You could still beat SittingDuck, since you gain energy every time you hit. And SittingDuck does not shoot, so you will still get 100%.
You will do well against top bots (assuming you have good movement) with a 33% hit rate. You will do well against SittingDuck with 33%. But you may not get such a good score against average bots, since, while 33% is okay, the enemy may have a higher hit rate.