So, I have tried all of the ideas above, but I am still skipping turns. Obviously I can still work on 3 and 4, but I had a question about the second-wave points optimization: is a hashmap fast enough, or should I try to make a custom class to handle that? Also, I ran Gilgalad 1.8 on Windows (yes, I admit it, I use Windows sometimes) to show some other people and noticed that it didn't skip any turns!?! Further tests show that it will occasionally skip turns, but not nearly as much as it does on Linux (Fedora 16). This was all on the same computer but with slightly different versions of Robocode (both 1.7.x, however). Any ideas as to why it skips more turns on Linux?
My number one guess is... perhaps you are using a 64-bit OS in one case and a 32-bit OS in the other? That can have a significant impact on performance with pointers (which the JVM relies on much more heavily than typical compiled code).
Yes, the Linux installation is 64-bit but the Windows one is 32-bit. Thanks!
Maybe you have different CPU constants?
I spent about a month optimizing Tomcat and can give only one piece of advice: accurate instrumentation of time spent. Not only average, but also min and max times. Only after I did that did I find Tomcat's hotspots.
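As a rough illustration of that kind of instrumentation (a generic sketch, not code from Tomcat or any bot; all names are made up):

```java
// Accumulates min/avg/max timings for one code section.
public class SectionTimer {
    private long count, totalNanos;
    private long minNanos = Long.MAX_VALUE, maxNanos = Long.MIN_VALUE;

    public void record(long nanos) {
        count++;
        totalNanos += nanos;
        if (nanos < minNanos) minNanos = nanos;
        if (nanos > maxNanos) maxNanos = nanos;
    }

    public String report() {
        if (count == 0) return "no samples";
        return String.format("n=%d avg=%.1fus min=%.1fus max=%.1fus",
                count, totalNanos / 1000.0 / count,
                minNanos / 1000.0, maxNanos / 1000.0);
    }
}
```

Wrapped around a suspected hotspot with `System.nanoTime()`, the max column is usually what exposes the occasional slow tick behind a skipped turn.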
Didn't try on Linux yet (yep, another Windows user). But in Windows 7, CPU affinity has an impact on skipped turns, which means different time-sharing algorithms lead to different amounts of skipped turns.
Point 2 - I just use an ArrayList for first-wave points and another for second-wave points. I put the first through Collections.sort(), then iterate through it. At each point I calculate the minimum reachable second-wave danger; however, if the first-wave danger alone is more than the minimum combined first + second danger found so far, I break. I'm not sure why a hashmap would even help here?
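In case it helps, a rough sketch of that pruning loop (`MovePoint` and its fields are placeholders, not anyone's actual bot code):

```java
import java.util.Collections;
import java.util.List;

// Placeholder candidate point; a real bot would also carry position, etc.
class MovePoint implements Comparable<MovePoint> {
    double firstWaveDanger;
    double[] secondWaveDangers; // dangers of second-wave points reachable from here

    public int compareTo(MovePoint o) {
        return Double.compare(firstWaveDanger, o.firstWaveDanger);
    }
}

class WaveSearch {
    // Cheapest second-wave follow-up from p (stub: min over reachable dangers).
    static double minReachableSecondWaveDanger(MovePoint p) {
        double min = Double.POSITIVE_INFINITY;
        for (double d : p.secondWaveDangers) min = Math.min(min, d);
        return min;
    }

    // Points sorted ascending by first-wave danger; once a point's first-wave
    // danger alone can't beat the best combined total, nothing later can either.
    static double bestCombinedDanger(List<MovePoint> firstWavePoints) {
        Collections.sort(firstWavePoints);
        double best = Double.POSITIVE_INFINITY;
        for (MovePoint p : firstWavePoints) {
            if (p.firstWaveDanger >= best) break; // prune: list is sorted
            best = Math.min(best, p.firstWaveDanger + minReachableSecondWaveDanger(p));
        }
        return best;
    }
}
```

The early break is what keeps this cheap: O(n log n) for the sort, plus only as many second-wave evaluations as survive the pruning.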
Have you checked that your JVM is the same version for both? And the CPU constant? 64-bit might be better at raw throughput for the benchmark but about the same for stuff with lots of decisions in it, like a bot.
Actually, 64-bit would probably be worse in some things too. Specifically, 64-bit mode means pointers take twice as much memory, effectively doubling the memory bandwidth requirements for code that spends most of its CPU time dealing with pointers between objects. For Robocode bots, deep recursive data structures are not too uncommon, and additionally Java is more subject to this problem than compiled object-oriented languages, because Java doesn't allow non-reference composition of classes.
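That said, 64-bit HotSpot JVMs from around this era support compressed oops (the `-XX:+UseCompressedOops` flag), which shrink object references back to 32 bits for heaps under roughly 32 GB and recover most of that memory bandwidth. Whether it is enabled by default depends on the exact JVM version, so it could be worth checking on the Linux install; whether it actually explains the skipped turns here is just a guess.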
I meant: would there be a faster way to cache the dangers of points on the second wave? For the first-wave paths, I use an array and a quicksort that takes an array of dangers and returns an array of the order of the dangers, which was the fastest way I could think of doing this. I don't know much about hashmaps, so I was wondering if there would be a faster way to store about 400 points and then check whether any of them are identical to another point. On thinking it over, the points would probably be added in increasing or decreasing order of X and Y, so I may be able to keep an ArrayList sorted without much overhead.
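For what it's worth, a HashMap lookup is constant time on average, and the usual trick for "are these two points identical?" is to give the key consistent equals()/hashCode(). A rough sketch, with a made-up grid-snapping resolution so that nearly identical points collide on purpose (none of this is from Gilgalad):

```java
import java.util.HashMap;
import java.util.Map;

// Caches second-wave danger per location, snapping coordinates to a
// coarse grid so points that are "the same" map to the same key.
public class DangerCache {
    private static final double GRID = 1.0; // hypothetical snap resolution, in pixels
    private final Map<Long, Double> cache = new HashMap<Long, Double>();

    // Pack snapped (x, y) into one long: trivial equals/hashCode for free.
    private static long key(double x, double y) {
        long gx = Math.round(x / GRID);
        long gy = Math.round(y / GRID);
        return (gx << 32) ^ (gy & 0xffffffffL);
    }

    public Double get(double x, double y) {
        return cache.get(key(x, y));
    }

    public void put(double x, double y, double danger) {
        cache.put(key(x, y), danger);
    }
}
```

At ~400 points per tick, either this or a sorted ArrayList with binary search should be far from the bottleneck; the precise prediction itself will dominate.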
Hmm. What I do is pre-calculate a whole bunch of points by precise prediction until a few ticks after the second wave has hit. Then I use these points as 'destinations' for my goto algorithm. Each point holds its own danger value, and this danger is the one used if the point is 'reachable'; otherwise I calculate a new danger for whatever location the wave hits me at. That way I never run into problems with duplicate points, association, etc.
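In outline, that approach could look something like this (a loose sketch with placeholder names, not any bot's actual movement code; the `isReachable` stub stands in for whatever goto-feasibility test is used):

```java
import java.awt.geom.Point2D;
import java.util.List;

// A precomputed destination: a location plus the danger found for it
// during precise prediction.
class Destination {
    final Point2D.Double location;
    final double danger;

    Destination(Point2D.Double location, double danger) {
        this.location = location;
        this.danger = danger;
    }
}

class GotoSurfing {
    // Stub: a real bot would test whether the goto movement can still
    // make it to d.location before the wave arrives.
    static boolean isReachable(Destination d) {
        return true;
    }

    // Choose the safest destination that is still reachable in time. If none
    // is, the caller computes a fresh danger for wherever the wave hits.
    static Destination safest(List<Destination> candidates) {
        Destination best = null;
        for (Destination d : candidates) {
            if (isReachable(d) && (best == null || d.danger < best.danger)) {
                best = d;
            }
        }
        return best;
    }
}
```

Since each candidate carries its own danger, there is no separate cache to keep consistent, which is what sidesteps the duplicate-point problem.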