| Thread title | Replies | Last modified |
| Decision Speed Optimisation | 3 | 17:06, 5 May 2013 |
| Speed and energy consumption | 2 | 14:07, 13 March 2013 |
| Vandalism? | 2 | 19:45, 23 November 2012 |
I was playing around with a speed comparison between manual point-in-field checks and the Rectangle2D.Double.contains(x, y) method, and discovered that if I used && between the booleans I got pretty much exactly the same speed, but if I used & I got double the speed and exactly the same result. I'm guessing this is because the CPU can evaluate multiple boolean expressions simultaneously, whereas && forces them to be evaluated one at a time; or perhaps & just keeps the pipelines fuller because there is less branching.
Of course, there are still reasons you might want &&, like null checking before examining an object's property, and && gives smaller codesize for bots in the codesize-restricted categories. But for high-load situations like the inside of precise prediction or play-it-forward loops, using & for your decisions might gain you some speed =)
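The difference described above can be sketched with two versions of a plain axis-aligned bounds check. This is an illustrative sketch, not the poster's actual code; the method names and parameters are assumptions:

```java
public class BoundsCheck {
    // Non-short-circuit version: all four comparisons are evaluated
    // unconditionally, so the JIT can compile this as straight-line,
    // branch-free code.
    static boolean contains(double px, double py,
                            double x, double y, double w, double h) {
        return px >= x & px <= x + w & py >= y & py <= y + h;
    }

    // Short-circuit version: each && introduces a conditional branch,
    // and evaluation stops at the first false comparison. Same result
    // for pure numeric comparisons, but more branches for the CPU.
    static boolean containsShortCircuit(double px, double py,
                                        double x, double y, double w, double h) {
        return px >= x && px <= x + w && py >= y && py <= y + h;
    }
}
```

Both return identical results for side-effect-free numeric comparisons; the difference only matters when one operand must guard the other (e.g. a null check), which is exactly where && is still required.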
Nice. This is a kind of optimization that you can't figure out by doing CPU profiling alone.
That's interesting and something I didn't know. I don't think there would normally be parallel evaluation going on, so I suspect it's the lack of branching: bit-wise operations are just very simple and fast. I found a couple of StackOverflow questions about it too, where people mostly mention branching.
I'm probably in the "not worth the cost to readability" camp, but I know you're a little closer to the "anything for speed" end of the spectrum. :-)
I'm actually implementing a rotated Rectangle2D.Double so that I can evaluate out-of-bounds PIF with a projected replay in Neuromancer. I was worried that it would be much slower than a standard .contains(x, y) function, so I was testing that. When I hit on the bitwise operations I thought I'd try a standard non-rotated function, and it was twice as fast as the awt.geom one. My rotated function is now slightly faster than the awt.geom one as well.
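A rotated containment test along these lines could look like the sketch below. This is a hedged guess at the technique, not Neuromancer's actual code: the point is rotated into the rectangle's local frame, then tested with the usual axis-aligned comparisons, again using & to keep the check branch-free.

```java
public class RotatedRect {
    // Hypothetical rotated-rectangle containment test. (cx, cy) is the
    // rectangle's center, w/h its width and height, angle its rotation
    // in radians. All names and signatures are illustrative assumptions.
    static boolean rotatedContains(double px, double py,
                                   double cx, double cy,
                                   double w, double h,
                                   double angle) {
        double cos = Math.cos(-angle), sin = Math.sin(-angle);
        double dx = px - cx, dy = py - cy;
        // Rotate the point by -angle so the rectangle becomes axis-aligned.
        double lx = dx * cos - dy * sin;
        double ly = dx * sin + dy * cos;
        // Branch-free bounds check in the rectangle's local frame.
        return lx >= -w / 2 & lx <= w / 2 & ly >= -h / 2 & ly <= h / 2;
    }
}
```

With angle = 0 this reduces to the ordinary axis-aligned test, so the rotated version only pays for two trig calls and a 2D rotation on top of it (and the trig can be hoisted out of a loop if the angle is fixed per tick).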
I agree that sprinkling them around liberally isn't really going to make much of a difference, and it will hurt readability. But for something that is just a bunch of numerical comparisons, where the cost of all the branches outweighs any time saved by the short-circuiting in &&, this can be a decent speedup. The only applications in Robocode intensive enough, with enough decisions, to make this worthwhile (at least that I can think of) are in-field-bounds testing and range searches in Kd-Trees.
Slow bots doing the same thing as fast bots should consume more energy (hardware power, not robot life) per battle IMHO because of longer CPU usage.
Usually a CPU is in one of two states, active or sleeping. Only the sleeping state saves energy, and during a battle the CPU stays in the active state the whole time, whether the bots are fast or slow.
I don't really care if a robot uses a lot of time. The time allowed is already limited by the Robocode engine, and if Robocode provides the time, I don't see anything wrong with using as much of that time as you want. Of course, it is nicer when a robot runs quickly. Regardless, it is tempting to work on a way to measure the CPU usage of all the robots in the Rumble, as it would be very interesting to see.