This is impossible. I've watched that bot in awe for years.
Have you tried running it locally? Those results don't look suspicious to me, especially from 2 different clients. And assuming they're legit, congrats. :-)
My clients are legit as far as I know. The only difference from the standard 1.7.3.2 clients on SourceForge is that I run them with maximum OS priority and dedicated cores, to minimize false turn skipping (and so I can still use my machine while the clients are running).
There are still only 3 battles, with results both above and below 50%. The average may well shift with more battles, say 15 or so. You never know the final average score against top bots until you've fought enough battles against them. Still, scoring above 50% even once against those bots is not trivial.
I've yet to try it locally here, but that doesn't look so impossible to me. In particular, I'd note that the Aristocles movement you're currently using is a "flat profile" type of movement, so you would expect it to score well against high-ranked bots relative to its own rank. If the results are accurate, congrats indeed :)
Oh, and I'd note that further evidence you have some rather nice targeting there is that DeBroglie scores around 50% against PolishedRuby, which is a mirror-movement bot with my RougeDC targeting. RougeDC and Scarlet score around 56% against PolishedRuby, so this would certainly imply that against a "flat movement" like Aristocles, your targeting isn't too far behind RougeDC/Scarlet... Looks to me like your targeting is on the right track ;)
Oh, and testing locally once, this was the result:
Rank | Robot Name | Total Score | Survival | Surv Bonus | Bullet Dmg | Bullet Bonus | Ram Dmg * 2 | Ram Bonus | 1sts | 2nds | 3rds |
1st | pez.rumble.CassiusClay 2rho.02no | 2677 (50%) | 950 | 190 | 1360 | 177 | 0 | 0 | 19 | 16 | 0 |
2nd | tjk.deBroglie rev0025 | 2654 (50%) | 800 | 160 | 1514 | 180 | 0 | 0 | 16 | 19 | 0 |
Thanks for the notes!
It's been a real help developing targeting on top of a known movement. I compare DeBroglie to Aristocles as a guide for how my gun stacks up: any variation is due to the difference in guns, since the difference in radar handling is presumably minuscule. Comparing the PBI between the two bots has been very instructive!
I've just about finished the main work on targeting: the point where I stop adding features and just tweak and debug.
It's been so useful to develop this way that I've been tempted to make a DeBroglie-M with my movement and Aristocles' gun so that I can debug movement issues in a similar way... and finally combine my movement and gun when I think they're both in a workable state.
I know many of the bots have a base class which calls the movement and gun separately - Diamond is the first that springs to mind, although I think it was actually CassiusClay which pioneered this structure.
And just a note, Aristocles doesn't use waves in the movement, only in the gun. The simplest wavesurfing movement would be that of BasicSurfer, although to get something reasonable you probably need to go with something like Voidious's Komarious or my CunobelinDC.
I actually think you're better off just freezing your gun changes at that point and comparing the movement to your last version with Aristocles' movement. There could be cases where tuning to Aristocles' gun would not be the same as tuning to your own gun. As long as you're comparing against a fixed gun, changes can be attributed to the movement. Just my 2 cents. :-)
Keep favoring readability over performance or you will lose control of the code in a few months (unless you are writing a codesize-restricted bot).
You will only start stumbling into skipped turns if you start analysing huge amounts of log data, or do some deep precise prediction. Even then, if you keep the heavy processing code encapsulated, only a few parts of the code will need optimization.
I still use plain java.util.ArrayList in k-NN search and it works. Rednaxela's kd-tree is incompatible with my normalized Euclidean distance search. :(
It's a bit off topic, but I'm curious, in what way is the normalized euclidean distance search incompatible? I have a "DistanceFunction" interface that allows other distance metrics to be implemented. It should even be just fine with the scaling of different dimensions changing over time (it just makes the tree splits slightly less optimal, but not a big deal generally).
Is it fine to call setWeights() before getting a nearest-neighbors list, then call setWeights() again to return to the original weighting?
I was hoping to implement an anti-surfer gun with custom dimension weights, but only to use those weights at getKNN time.
Yep, that should work just fine.
(Note, the newer version of the tree doesn't have the "setWeights()" thing but it would be trivial to include inside a "DistanceFunction" implementation.)
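For illustration, here is roughly what a weighted metric plugged into that kind of "DistanceFunction" interface could look like. This is only a sketch: the interface name comes from the discussion above, but the method names (distance, distanceToRect) and signatures are my assumption about the newer tree's API, so check the actual source before relying on them.

```java
// Sketch of a weighted squared-Euclidean metric for a k-NN tree.
// The interface name is taken from the discussion above; the method
// names/signatures are assumed, so verify them against the real tree code.
public class WeightedSqrEuclideanDistance implements DistanceFunction {
    private final double[] weights; // e.g. anti-surfer weights, applied only at query time

    public WeightedSqrEuclideanDistance(double[] weights) {
        this.weights = weights;
    }

    // Weighted squared distance between two points
    public double distance(double[] p1, double[] p2) {
        double d = 0;
        for (int i = 0; i < p1.length; i++) {
            double diff = (p1[i] - p2[i]) * weights[i];
            d += diff * diff;
        }
        return d;
    }

    // Lower bound on the distance from a point to a node's bounding box,
    // which the tree uses to prune whole branches during the search
    public double distanceToRect(double[] point, double[] min, double[] max) {
        double d = 0;
        for (int i = 0; i < point.length; i++) {
            double diff = 0;
            if (point[i] < min[i]) {
                diff = (point[i] - min[i]) * weights[i];
            } else if (point[i] > max[i]) {
                diff = (point[i] - max[i]) * weights[i];
            }
            d += diff * diff;
        }
        return d;
    }
}
```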
Thanks for the info! Replacing the wiki version of your tree with the bitbucket one is one of the very next things I'm doing. I need that iterator now, because I'm occasionally culling too many neighbors via displacement vector, leaving me with too-sparse data. The ability to fetch more will be invaluable!
A faster way of doing this would probably be to have two different trees, one with weights of one type and another with weights of another type. This will ensure that your tree splits are more optimal, and you can use completely different dimensions as well without a slowdown.
Interesting idea! If I can spare the cycles, that would be interesting to do. Are there segmentations that are useful against adaptive movements but utterly superfluous against non-adaptive movements? (other than data age?) Vice versa?
I'm unsure if that would really be faster. Since these trees are generated on the fly without rebalancing, it's not like they're hugely optimal in the first place. I doubt the overhead of changing the weighting would be much compared to how non-optimal the tree already is. Additionally, two trees could be worse from the perspective of on-CPU caches. IMO it's worth trying both ways and benchmarking.
If balancing the tree means better performance, would the end of a round (after death or victory) be a good time to balance the tree, or would it even make a difference in terms of performance during the match? I suppose it all depends on how hard one's bot is pushing up against the edge of the time slice it's allowed.
The end of the round probably would be a good time yes, though I've never really tried it. The re-balancing would take longer each round though, so it may make sense to only do that for the first few rounds of battle. As the number of rounds increases there would be diminishing returns to constantly re-balancing anyway.
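To make the round-end rebuild idea concrete, here is a rough sketch. It assumes you keep your own list of entries alongside the tree, and KdTree/addPoint here are placeholders rather than the actual tree API. Rebuilding from the full, shuffled dataset is not a true balanced bulk-load, but it tends to give better bucket splits than purely incremental insertion:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch only: KdTree and addPoint() stand in for whatever tree you use.
public class RebuildableLog {
    private static final int REBUILD_ROUNDS = 5; // only rebuild early in the battle

    private final List<double[]> points = new ArrayList<double[]>();
    private final List<Double> values = new ArrayList<Double>();
    private KdTree<Double> tree = newTree();

    // Normal logging path: remember the entry and add it to the live tree
    public void add(double[] point, double value) {
        points.add(point);
        values.add(value);
        tree.addPoint(point, value);
    }

    // Call from onWin()/onDeath(); skipped after the first few rounds,
    // since the payoff of rebuilding diminishes as the tree grows
    public void maybeRebuild(int roundNum) {
        if (roundNum >= REBUILD_ROUNDS) {
            return;
        }
        KdTree<Double> fresh = newTree();
        List<Integer> order = new ArrayList<Integer>();
        for (int i = 0; i < points.size(); i++) {
            order.add(i);
        }
        Collections.shuffle(order); // decorrelate the insertion order
        for (int i : order) {
            fresh.addPoint(points.get(i), values.get(i));
        }
        tree = fresh;
    }

    private KdTree<Double> newTree() {
        return new KdTree<Double>(8); // constructor arguments are assumed
    }
}
```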
Doing it only for the first few rounds makes sense. Relatively quickly, your dimension balancing should conform to the enemy's movement pattern, and later rounds probably wouldn't shift the balance dramatically.
The distance dimension for a bot that heavily prefers a certain orbit distance will get properly split after just a few re-balancing operations, making further balancing on that dimension a comparative waste of time, I would imagine.
I'm using normalized Euclidean distance (weight = 1/stddev) with a fast sampled deviation estimate, so the weights do change over time (k-NN for lazy people who don't fine-tune their bots).
I had the impression the changing weights were not being accounted for. My bot's score dropped dramatically with the kd-tree. I suspect it is the cached coordinate limits inside each node. Or maybe I screwed something up while trying to use the library.
Changing weights is certainly intended to be accounted for, though I hadn't tested it as much as other features. I know it couldn't be the coordinate limits inside each node, because they are copying the min/max values from the points, which don't change with weighting. Do you remember which version of the tree you were trying this with?
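For reference, the weight = 1/stddev scheme being discussed can be sketched as below, using a full running (Welford) estimate rather than the fast sampled one mentioned above; all the names here are just illustrative:

```java
// Illustrative sketch of per-dimension weights of 1/stddev from running
// statistics; class and method names are made up for this example.
public class NormalizedWeights {
    private final int dims;
    private final long[] n;
    private final double[] mean;
    private final double[] m2; // sum of squared deviations (Welford's algorithm)

    public NormalizedWeights(int dims) {
        this.dims = dims;
        n = new long[dims];
        mean = new double[dims];
        m2 = new double[dims];
    }

    // Update the running statistics with one observed data point
    public void observe(double[] point) {
        for (int i = 0; i < dims; i++) {
            n[i]++;
            double delta = point[i] - mean[i];
            mean[i] += delta / n[i];
            m2[i] += delta * (point[i] - mean[i]);
        }
    }

    // weight[i] = 1 / stddev[i], so every dimension contributes comparably
    public double[] weights() {
        double[] w = new double[dims];
        for (int i = 0; i < dims; i++) {
            double var = n[i] > 1 ? m2[i] / (n[i] - 1) : 1.0;
            w[i] = var > 0 ? 1.0 / Math.sqrt(var) : 1.0;
        }
        return w;
    }
}
```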
Usually it's just a matter of optimizing the code and algorithms, but some sort of locking system could be interesting. I'm unsure such locking would help much, though, since at least in my experience a high-quality surfing movement takes far more CPU than just about any targeting out there.
For what it's worth, adding simple guns to a virtual gun system which has a dynamic clustering, precise intersection gun is not going to give you any more points. The only thing which might help is a circular gun, and even then just against SpinBot.
If you design your algorithm taking speed into account to begin with, it shouldn't be a problem. Be careful of nested loops, and use FastTrig and you should be fine. The biggest speed increases for me came when I figured out ways I could keep a value for later, instead of re-calculating it, particularly things like square roots (distances) and KNN results from Dynamic Clustering. If you can eliminate whole branches of code from being re-executed it can also be a major time-saver, such as exiting a search early if you know that all points from here on will be infeasible.
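As a small illustration of the "keep a value for later" point, something like caching the expensive k-NN result once per tick, so the real gun and any virtual guns reuse the same cluster; the names here are made up for the example:

```java
import java.util.List;

// Sketch: cache the k-NN query result per tick instead of re-running the
// search for every consumer. KnnSearch stands in for your own data structure.
public class ClusterCache {
    private long cachedTime = -1;
    private List<double[]> cachedCluster;

    public List<double[]> cluster(long time, double[] searchPoint, KnnSearch search) {
        if (time != cachedTime) {
            cachedCluster = search.nearestNeighbours(searchPoint, 50);
            cachedTime = time;
        }
        return cachedCluster;
    }

    // Minimal interface standing in for whatever k-NN structure you use
    public interface KnnSearch {
        List<double[]> nearestNeighbours(double[] point, int k);
    }
}
```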
I do disagree slightly with that first point. RougeDC's gun system, which is primarily k-nn precise intersection targeting, did benefit slightly from the inclusion of a "Single Tick" PM targeting system. Switching between it and the main targeting system seems to pay off slightly against some surfers the targeting would otherwise have more trouble with. In general though, I do agree that simple guns in a virtual gun system won't gain all that much.
I wouldn't exactly term a Single-Tick PM gun as 'simple' ;-) I was more thinking along the line of HOT, linear, circular, random linear and averaged linear. Single-Tick is still something I need to play with... possibly Single-Tick KNN?
Well... my current opinion is that the single-tick thing just wastes way too much CPU to be worth it. Maybe it would be worth trying a similar concept with a larger fixed number of ticks though... like predicting forward in 10-tick intervals...
The simple targeter I really wanted most was dead-stop: where would the bot end up if it stopped as soon as it saw the wave, accounting for deceleration, etc.?
I've even pondered making dead-stop the 0.0 center of my guess factors instead of HOT, in which case the separate dead-stop targeter becomes superfluous. I wasn't sure if it'd yield anything significant, but it bugs me that 1.0 and -1.0 have a precise physical/decision meaning when you do a good MEA calculation, while 0.0 doesn't correspond to any enemy decision, whereas dead-stop would.
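For reference, a dead-stop predictor can be sketched as below, stepping the enemy forward with Robocode's 2 px/tick deceleration until it stops. It ignores wall collisions and the sub-tick deceleration corner cases, and the class/method names are just for this example:

```java
import java.awt.geom.Point2D;

// Sketch of a "dead-stop" predictor: assume the enemy starts braking the tick
// it sees the wave and decelerates at Robocode's 2 px/tick^2 until stopped.
public class DeadStopPredictor {
    private static final double DECELERATION = 2.0; // Robocode rule

    public static Point2D.Double stopLocation(Point2D.Double enemyLocation,
                                              double headingRadians, double velocity) {
        double x = enemyLocation.x;
        double y = enemyLocation.y;
        double speed = Math.abs(velocity);
        double direction = Math.signum(velocity); // negative velocity means moving backwards
        // Step tick by tick: velocity drops by 2 each tick, then the bot moves
        while (speed > 0) {
            speed = Math.max(0, speed - DECELERATION);
            x += Math.sin(headingRadians) * speed * direction;
            y += Math.cos(headingRadians) * speed * direction;
        }
        return new Point2D.Double(x, y);
    }
}
```

Projecting that stop location onto the wave then gives the firing angle, or the offset you'd treat as the 0.0 center.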
Random targeting does have its uses. A pure random gun usually improves the score against very strong movement, and hurts the score against everything else.
As for "readability" vs "efficiency", I don't think they are necessarily at odds, besides as they compete for your time and effort. Like Skilgannon said, the most important optimizations are high level stuff - fast code can still be readable, testable, and maintainable. Having everything in one giant method with big ugly lines is not necessarily going to give a huge performance increase.
I think it just so happens that some bots, like DrussGT and Shadow, went through a lot of changes without ever getting sufficiently refactored, so the code is a little disorganized. But the authors made the effort to keep them fast. ;) (In Shadow's case, I'm just going off what ABC has said...) I feel Diamond is decent code and pretty fast. Rednaxela's kd-tree is also good code and lightning fast.
With regards to the "readability" vs "efficiency" thing, I'd agree that they don't have to be entirely at odds. In the version of my kd-tree posted here I do feel I sacrificed too much readability with the aim of gaining performance, but in the rewrite (here) I found that almost all of the readability I sacrificed in the earlier version was unnecessary. Really, a good number of things that you might think are not optimized turn out to be taken care of by the JIT, so you just need to worry about higher level aspects in most cases.