Talk:Oculus
Contents
| Thread title | Replies | Last modified |
|---|---|---|
| Skipped Turns | 8 | 12:45, 12 September 2017 |
| Preferred Distance | 1 | 11:26, 8 September 2017 |
| Movement | 13 | 15:34, 3 September 2017 |
| Bullet Shielding | 22 | 13:34, 1 September 2017 |
This is not just about Oculus, but it happens a lot to me. I do the things below.
- Start the battle
- Maximise the speed (oh no, too many skipped turns)
- Minimise the window (wow, my robot is faster than Shadow. No skipped turns)
Is this normal? Maybe it's something about Robocode's robot painting time. Debug graphics are turned off, by the way.
Yeah, I also noticed this. Robocode seems to give every robot a certain amount of time to do calculations. Whenever I activate debug graphics, there are fewer skipped turns; maybe that's because Robocode provides more time in that case. When the window is minimized, battles run faster. Maybe the time for painting the battlefield is added to the regular bot calculation time, but I am not sure.
Does anybody know whether there is a way for the robot to know how much time it can spend before it would lead to a skipped turn?
I would say simply try it and save data ;)
E.g. just run your code like normal, but record how long it takes. When a skipped turn is recorded, lower your run-time limit and save it. Only when skipped turns stop happening will the time limit stop decreasing, and then you know it.
However, if skipped turns only happen occasionally, you may end up limiting yourself too strictly ;)
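A rough sketch of what that could look like, assuming an AdvancedRobot (the starting budget and the names here are made up, not anything Oculus actually does):

```java
// Rough sketch of an adaptive per-turn time budget (names and numbers are made up).
public class BudgetedBot extends robocode.AdvancedRobot {
    // Start near the usual CPU constant (~5 ms) and tighten it whenever a turn is skipped.
    private long timeBudgetNanos = 5_000_000;

    public void onScannedRobot(robocode.ScannedRobotEvent e) {
        long start = System.nanoTime();
        // ... cheap, mandatory work first: movement, gun turning, firing ...
        while (System.nanoTime() - start < timeBudgetNanos) {
            // ... optional extra work: more precise prediction, deeper search, etc. ...
            break; // placeholder so this sketch terminates
        }
    }

    public void onSkippedTurn(robocode.SkippedTurnEvent e) {
        // We went over the limit somewhere, so lower the budget a little.
        timeBudgetNanos = (long) (timeBudgetNanos * 0.9);
    }
}
```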
If your robot skips turns, you can't measure the time. =)
Skipping turns is disabled when debug graphics are enabled. This is so you can do extra work while debugging, compared to a regular run, without skipping turns.
What I wonder is why the skipped turns stop when I minimise the window.
It could be that painting generates a lot of objects for the garbage collector.
The permitted amount of time per tick for a robot is given by the "cpuConstant" (see the source). On a modern computer it is about 5--6 ms. You can check/recalculate it via the menu in the GUI, under Options; it is reported in nanoseconds.
If you want to time your bot, I would recommend using the nanoTime() call for profiling. You can see how a bit fancier profiling is done via a special class at [[1]]; I am using it in EvBotNG. If you look at the console, you will see profiler info at the end of every round.
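For something minimal, a sketch along these lines works (this little Profiler class and the section names are just an illustration, not the class linked above):

```java
// Minimal nanoTime()-based profiling sketch; section names ("gun", "movement") are examples.
import java.util.HashMap;
import java.util.Map;

public class Profiler {
    private final Map<String, Long> totals = new HashMap<>();

    // Time one named section of code.
    public void time(String section, Runnable work) {
        long t0 = System.nanoTime();
        work.run();
        totals.merge(section, System.nanoTime() - t0, Long::sum);
    }

    // Call at the end of a round to dump the totals to the robot console.
    public void dump() {
        totals.forEach((name, nanos) ->
            System.out.printf("%s: %.2f ms%n", name, nanos / 1e6));
        totals.clear();
    }
}
```

Then wrap the expensive parts, e.g. profiler.time("gun", this::aim), and call profiler.dump() from onRoundEnded().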
Mine is a bit slow. It's about 6.4 milliseconds.
- Oculus uses a preferred distance of 450. WaveSurfingChallengeBotC gives about 400 damage in a match.
- Here is the problem:
- When I used a preferred distance of 100, Oculus took only 270 damage.
- The more interesting thing is that when I used a preferred distance of 650, it took 800 damage.
- Isn't this a bit weird?
Oculus's movement fails. I can't download any of the movement challenges (all of the links are broken or the sites are down). I don't know how to be sure that it has improved. Are there any key bots to test against?
- I found proof that my movement is a failure. Here is a battle result against Raiko:
| Robot | Total score | Survival | Surv bonus | Bullet dmg | Bullet bonus | Ram dmg | Ram bonus | 1sts | 2nds | 3rds |
|---|---|---|---|---|---|---|---|---|---|---|
| dsekercioglu.OculusRaikoGun* | 2437 (52%) | 950 | 190 | 1151 | 147 | 0 | 0 | 19 | 16 | 0 |
| jam.mini.Raiko 0.43 | 2214 (48%) | 800 | 160 | 1111 | 143 | 0 | 0 | 16 | 19 | 0 |
You can try to download them from archive.org even when the links are broken ;) Hopefully they are archived.
Find the bot you think you're notably failing against, enable some graphical debugging and watch a few matches at different speeds. Go back to your code, wonder what the hell is going on. Repeat. That's how I fixed most of the bugs for my last release. Maybe it will work for you :P
I don't think it is because of a bug. Every movement I make is really bad. My old bots were built on BasicGFSurfer, so I don't think there are bugs.
Hmmm, it's neural, right? Are there any successful cases of neural surfing besides Darkcanuck's bots?
As far as I know, only Pris uses neural surfing. Even if this movement is one of the worst in the top 100, it is my best movement =). I think I should tune it more against mid-level guns so it would be better in general.
Wow, was looking for this (old) challenge! Thank you, MultiplyByZer0.
Regarding the experiment, I know NNs have the problem of slow learning, since there isn't much data at the beginning of the game. Couldn't reinforcing firing waves in the first few rounds solve the problem, though? Another suggestion would be to use waves with low virtuality (those tick waves which are close to a firing tick) to make up for the lack of information without polluting the network with flattening-like data right in the first rounds.
I did something like that in my gun and it improved a lot. Of course, I'm no reference in Neural Targeting: I improved from a really bad gun to a miserable one :) Well, maybe you've already done that after all.
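Just to illustrate the idea (the names and the particular falloff here are made up, not what my gun actually does):

```java
// Hypothetical illustration of weighting waves by "virtuality"; names and numbers are made up.
public class WaveWeighting {
    // Full weight for real firing waves; tick waves get less weight the further
    // they are from an actual firing tick, so early-round flattening-like data
    // doesn't swamp the network.
    public static double weightFor(boolean isFiringWave, int ticksFromFiringTick) {
        if (isFiringWave) {
            return 1.0;
        }
        return Math.max(0.0, 1.0 - 0.25 * ticksFromFiringTick);
    }
}
```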
You should be getting 99+% against Bot A. If not, you have bugs in your surfing, or you aren't even attempting to control your distances. Look to Komarious or CunobelinDC for help here, and make sure you are predicting the same escape angles you intend to move in.
Once you've done that, if you aren't getting 95% against Bot B, you might still have bugs in your surfing's attribute collection, or you need to improve your learning. This is the simplest possible learning: a simple linear relationship between forward velocity and guess factor. If your learning algorithm can't quickly learn a simple linear relationship, you need to rethink it. I would suggest using super-simple learning (8 bins for the velocity value, plus a lower-weighted "all the data" bucket) to make sure you have the attribute collection correct, then move on to fixing your learning.
Finally, you should be getting 90% against Bot C. This can only be improved by adding better attributes that you think might inadvertently model your near-wall and escape-angle behavior.
Hope this helps. Better scores are of course possible, but are very design specific. However with early non-DC versions of Cunobelin I was able to get a 99.9 - 96.8 - 95.1 score, and this was just BasicSurfer with segmented learning and distancing.
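A rough sketch of the kind of segmentation described above (a sketch only: the 8 velocity bins are from the suggestion above, but the guess-factor bin count and the 0.2 weight are arbitrary, and guess factors are assumed to already be normalised to [-1, 1]):

```java
// Rough sketch of "8 velocity bins plus a lower-weighted all-data bucket".
// GF_BINS and the 0.2 weight are arbitrary; guess factors are assumed in [-1, 1].
public class SimpleStats {
    private static final int VEL_BINS = 8;
    private static final int GF_BINS = 31;

    private final double[][] segmented = new double[VEL_BINS][GF_BINS];
    private final double[] allData = new double[GF_BINS];

    private static int velBin(double velocity) {
        // Map velocity in [-8, 8] onto VEL_BINS bins.
        int bin = (int) ((velocity + 8.0) / 16.0 * VEL_BINS);
        return Math.max(0, Math.min(VEL_BINS - 1, bin));
    }

    private static int gfBin(double guessFactor) {
        int bin = (int) Math.round((guessFactor + 1.0) / 2.0 * (GF_BINS - 1));
        return Math.max(0, Math.min(GF_BINS - 1, bin));
    }

    // Record where a wave actually hit (as a guess factor) for the velocity we had.
    public void registerHit(double velocity, double guessFactor) {
        segmented[velBin(velocity)][gfBin(guessFactor)]++;
        allData[gfBin(guessFactor)]++;
    }

    // Danger of a candidate guess factor: segmented data plus the low-weighted fallback.
    public double danger(double velocity, double guessFactor) {
        return segmented[velBin(velocity)][gfBin(guessFactor)]
                + 0.2 * allData[gfBin(guessFactor)];
    }
}
```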
Looking at your results, I was quite curious about bullet shielders. It's the first time I've actually understood this strategy; I just watched a game. You get almost zero score against DrussGT, for example. It just sits there and blocks your bullets. I think I may safely assume those were all head-on shots. One thing I noticed is that DrussGT can't block Roborio, even though it was supposed to shoot head-on for the first shots, while there is no data. Does that mean that I'm not shooting head-on at all? That's weird...
One thing that is very important is that a BulletShielder needs to know the EXACT firing angles of your targeting. Even if there is a little deviation, it just doesn't work. That's why SimpleBot and Roborio should have no problem with shielders.
Yeah, what intrigues me is where this deviation actually comes from, since there is a lot going on under the hood, and why Oculus doesn't have such a "problem". I know it makes perfect sense. It's just too obscure.
First, a BulletShielder would simulate a traditional HOT that is aimed from one tick before, not the real fire position, as that position is your position when aiming (onScannedRobot). If it still hits, it would fall back to simulating a state-of-the-art HOT that is fired from the real position, which means an advanced firer that predicts its own movement one tick forward when aiming.
However, if you are firing from the real position, it is impossible for a BulletShielder to shield without moving. Therefore, for those who fire from the real position, a shielder must move a little to be able to shield. And that's where the deviation comes from.
Therefore, for a learning gun which fires from the real position and is able to learn that tiny move, BulletShielders don't work.
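To make those two simulated angles concrete, a small illustrative sketch (all names are made up; Robocode absolute bearings are measured clockwise from north, hence atan2(dx, dy)):

```java
// Illustrative sketch only: the two HOT headings a shielder might test.
public class HotCandidates {
    public static double absoluteBearing(double fromX, double fromY, double toX, double toY) {
        return Math.atan2(toX - fromX, toY - fromY);
    }

    // Traditional HOT: aimed with the positions from the aiming tick,
    // one tick before the bullet actually appears.
    public static double traditionalHot(double enemyAimX, double enemyAimY,
                                        double myAimX, double myAimY) {
        return absoluteBearing(enemyAimX, enemyAimY, myAimX, myAimY);
    }

    // HOT fired from the real fire position, i.e. a firer that predicted
    // its own movement one tick ahead when aiming.
    public static double preciseHot(double enemyFireX, double enemyFireY,
                                    double myFireX, double myFireY) {
        return absoluteBearing(enemyFireX, enemyFireY, myFireX, myFireY);
    }
}
```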
Hmmm, it should be obvious, right? Traditional HOT isn't actually going straight at the enemy's center, so the bullet's intersection has positive length. Anyway, I also thought that the condition for bullets intersecting somehow allowed for imprecise calculations, and that this would be enough to catch traditional HOT bullets, like using some epsilon when checking for the intersection. But it seems it's more like the intersection of the segments having positive length or something.
Thanks for clearing it up!
No, the bullet intersection code of Robocode is exact, and the deviation of traditional HOT must be calculated exactly as well.
What do you mean by "positive length"? I thought every length should be positive ;p
And I also thought an intersection of two segments (bullets are segments when calculating intersection) is a point ;p It has no length at all.
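As a sketch of what treating bullets as segments means for the collision test (an illustration only, not Robocode's actual code): each bullet is the segment it swept during the tick, and a collision is an intersection of the two segments.

```java
// Sketch: each bullet is the segment from its previous position to its current one,
// and two bullets collide when those segments intersect.
import java.awt.geom.Line2D;

public class BulletSegments {
    public static boolean bulletsCollide(
            double aPrevX, double aPrevY, double aX, double aY,
            double bPrevX, double bPrevY, double bX, double bY) {
        return Line2D.linesIntersect(aPrevX, aPrevY, aX, aY,
                                     bPrevX, bPrevY, bX, bY);
    }
}
```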
- Oculus doesn't have such a problem because it does some checking to be sure that the bullet would hit 100% if the enemy movement is predicted correctly.
- Something like this:

```java
// Only fire when the remaining gun turn is within roughly the target's angular
// half-width at this distance (18 = half of the 36-unit bot width):
boolean fire = (gunHeat == 0 && a.getGunTurnRemainingRadians() < 18 / (currentBattleInfo.distance - 18));
```
I would say just use getGunTurnRemainingRadians() == 0, as any deviation would decrease your hit rate if your statistics are right.
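In other words, something like this (reusing the naming from the snippet above):

```java
boolean fire = gunHeat == 0 && a.getGunTurnRemainingRadians() == 0;
```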
In the next version I will make it even more precise (with anti-bullet-shielding, of course). This version uses the angular bot width.