Talk:ScalarBot/Version History
Contents
Thread title | Replies | Last modified |
---|---|---|
:D | 16 | 11:30, 22 October 2017 |
WaveSurfing rethink | 19 | 05:27, 17 October 2017 |
What bug can cause zero score? | 3 | 17:37, 12 October 2017 |
Virtual Gun array settings? | 6 | 02:55, 9 October 2017 |
"Just created a new version that wins 66% constantly against 0.012n1.14c"
Oh, god, do not tempt me. Just release it, please! :P
Was it a movement change or an AS change? Looking forward to seeing its impact against the top bots and in the rumble.
Well, it’s a pure tick flattener movement. It improves my score against several good bots as well. But it doesn’t work against weak bots ;)
Wow, nice PWIN scores! Is this just using a tick flattener? I know that was something I added to DrussGT but never saw any real benefit from - then again, I never weighted it at 100% either.
Thanks a lot! The published version is using one single tree as a tick flattener and another tree as ordinary hit stats. The secret is that I don't accumulate the danger from the two trees (like a logical "or"); rather, I multiply them (like a logical "and"). This way, I'm moving to where they are probably not firing, instead of avoiding where they are probably firing ;)
I think this approach makes my movement even more unpredictable than a) hit stats only; b) flattener only; c) the sum of both. As for strong guns, they are firing everywhere, which leaves no safe spot (except for bullet shadows, which should, for the same reason, improve the score dramatically).
Worth mentioning that I originally took the idea from ABC, but I can't recall which page it was on ;/ I had tried this idea years ago, but it didn't work IIRC.
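As a minimal sketch of the difference being described, assuming hitDanger and flattenerDanger are the normalized dangers a candidate point receives from the two trees (the names are illustrative, not ScalarBot's actual code):

```java
// Illustrative only; assumes both dangers are normalized to comparable scales.
double additiveDanger(double hitDanger, double flattenerDanger) {
    // "or": a point scores high if EITHER tree flags it, so the surfer
    // avoids everything either model considers dangerous.
    return hitDanger + flattenerDanger;
}

double multiplicativeDanger(double hitDanger, double flattenerDanger) {
    // "and": the product is only high where BOTH trees agree the enemy is
    // likely firing, so the surfer moves to spots that at least one model
    // rates as safe.
    return hitDanger * flattenerDanger;
}
```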
Interesting approach! I had a similar idea around multi-bot targeting in Neuromancer where I do an XOR instead of simple OR because the bullet can only hit one of the potential enemies. I think this idea might have some real potential - although I think maybe using fire-only waves might get you even better scores against a lot of enemies and their anti-surfer guns.
You can be sure that there will be some experiments with DrussGT in the not too distant future =)
More thoughts on this.
When you only work with bullet hits, you can only be reactive to changes in the enemy targeting. Modern bots are designed around this, and they do a really good job with the limited information they have available too (see for example DrussGT's score on the Shark Challenge part 2) - even with complicated learning guns like RaikoMicro it is possible to effectively predict and dodge to get better results than a random gun would give.
However, the holy grail has always been to somehow predict where the enemy will shoot even before finding any bullets there. Theoretically we have the information we need to do that - we know where we were, we know what GFs for both hits and visits were logged, we could even model the type of gun the enemy has based on the bullet hits and (theoretically) transfer this learning across to the visits data. However, until now there hasn't been any successful demonstration of using this pre-emptive data beyond just making a movement that is "flat with flat sauce" rather than tailor-made to dodge a specific gun.
I know in the past [[User:Voidious]] did quite a few experiments around adding very weak tick-wave flattening against mid-level opponents but was never able to realise any measurable gains. If this is able to be replicated across other bots and stats systems I see this as a great step-wise improvement in the state-of-the-art of Robocode, much like taking advantage of Bullet Shadows.
Well, I think this approach first helped me against their main guns. Then, as unpredictability improved, their AS guns also had some trouble hitting me. Anyway, adding a virtuality dimension may further help ;)
I've also been thinking about simulating the fact that bullets can only hit one enemy at a time. But what I came up with is to use max(enemy1, enemy2) instead of enemy1 XOR enemy2, since when I have a 50% probability of hitting enemy1 and a 75% probability of hitting enemy2 at the same bearing offset, I'll end up with a 75% probability of hitting an enemy, instead of 125% ;p. Anyway, by XOR, do you mean some max - min? Or "enemy1 + enemy2 - 2 * enemy1 * enemy2" like a fuzzy XOR?
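For comparison, a sketch of the two combination rules being discussed, with p1 and p2 standing for the estimated probabilities of hitting enemy1 and enemy2 at the same bearing offset (assumed names, not anyone's actual code):

```java
double combineMax(double p1, double p2) {
    // max: the chance of hitting *some* enemy along this offset is at least
    // the larger probability and never exceeds 100%
    // (so 50% and 75% give 75%, not 125%).
    return Math.max(p1, p2);
}

double combineFuzzyXor(double p1, double p2) {
    // Fuzzy XOR under an independence assumption: the probability that
    // exactly one of the two enemies would be hit.
    return p1 + p2 - 2 * p1 * p2;
}
```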
Congratulations for 100% PWIN! Now it's only a question of time until you refine the tick flattener feature to work well against weak bots too ;)
Thanks! The tick flattener once improved my score against some relatively weak bot, but I think it may never work to help me dodge HOT bullets ;p
Potentially, this could be used to improve against any learning enemy.
Even though I have a bot that used to rank relatively high in the 1v1 division, I can't say I fully understood what I was doing and why it worked. I was always assuming some GF targeting which fires at the most frequently visited GF, with the most popular attributes in mind (e.g. segmenting on lateral velocity, accel, wall distance, etc.).
And even though I tried to consider more types of enemy targeting strategies later, I was still assuming some specific targeting strategy.
But today, after thinking about all that in dreams, an idea just came up.
Can we just not assume anything about the enemy's strategy? Be tough yourself, and they'll automagically have some trouble hitting you.
But that's not enough for a top movement. Besides not showing weakness in any sense, it would be a pity to lose the chance to dodge them better.
Statistics will always tell you the truth: once you are sure that they always fire head-on in some situations, why not try your best to make them see the same situation again when aiming? Yes, I'm talking about automagical stop&go, but in my observation far more guns have similar weaknesses.
Besides firing situations, when you are sure that they are very likely to fire bullets they've fired before, downgrading to traditional wave surfing seems good. And otherwise, why risk dodging somewhere they aren't firing at? If their targeting looks quite random (in a given firing situation), sitting still or moving randomly are also good choices. And if you don't move at all, you also won't risk hitting the wall or getting yourself stuck somewhere. (A rough sketch of this decision follows below.)
For every bit of the future you can predict, you can always know what you could do better. A lot of bots are too strict, imo, following their design (and the assumptions behind it) rigidly. But I think we can do better: give more freedom to the bot itself (which always knows the situation better), rather than planning everything in advance. Given the success of GF targeting and then kNN, I'm pretty sure there is still a lot to explore even today, and still a lot of room for bots to improve.
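A rough sketch of the kind of decision described above; all names, bins, and thresholds here are made up for illustration, not a claim about any existing bot:

```java
// Pick a dodging strategy from the enemy's observed firing distribution for
// the current firing situation (e.g. a normalized histogram over GF bins).
enum DodgeMode { RECREATE_SITUATION, SURF_STATS, SIT_STILL_OR_RANDOM }

DodgeMode chooseMode(double[] fireDistribution) {
    double peak = 0;
    for (double p : fireDistribution) peak = Math.max(peak, p);
    double uniform = 1.0 / fireDistribution.length;

    if (peak > 0.8) {
        // Enemy almost always fires at one GF in this situation (e.g. head-on):
        // recreate the situation and dodge that one angle ("automagic stop&go").
        return DodgeMode.RECREATE_SITUATION;
    } else if (peak > 2 * uniform) {
        // Enemy tends to repeat earlier shots: traditional wave surfing on the
        // accumulated stats still pays off.
        return DodgeMode.SURF_STATS;
    } else {
        // Targeting looks close to random here: dodging buys little, so sit
        // still or move randomly and avoid walls instead.
        return DodgeMode.SIT_STILL_OR_RANDOM;
    }
}
```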
I tried something like this in DrussGT 2.6.0, and debugged it until about 2.8.0, when I finally gave up on it. I've had some ideas since then that might help, but at least in a wavesurfing-style framework I was never able to get it to work. Maybe it requires looking further ahead than I did (I only did one wave), or maybe more penalty for entering unexplored parameter space, but I never managed to get any benefit from it.
Give it a try though, if you prove me wrong and find some value behind it I might just have to dust off DrussGT ;-)
The really important part imo is deciding when to use this strategy. For LT/CT guns this will definitely help, but for guns with distanceLast10, timeSinceXXX, etc., doing so is rather hard. Still, when the time before impact with the wave allows us to do so, instead of decelerating randomly like DrussGT, I think this system could give us a better choice ;)
It really helps actually. I once made a bot that always stopped before the enemy fired, so the enemy always saw the same situation. The problem was that it decreased my MEA a lot, but it would be really good if you tried. I have the same thoughts but am toooo lazy to do it.
I don't know if I fully understood your thoughts, but specifically about the auto stop&go thing: I thought of something like that some time ago and it seemed like an amazing thing. Most of the guns today are really predictable, and those which are less predictable are just being differently obvious in each situation. The data we have gives us statistical clues about the situations where enemies are more obvious. What if, besides moving into safer regions, we take our gun heat tracking into account and put our enemy into an obvious situation when it is firing? Of course this takes a lot of prediction capability, because we don't know where the enemy will be next, and different implementations of this idea can lead to very different results. So I think that even if it was already tried before, it is worth another shot.
I came up with that exactly when I thought about stop&go and why it is good. It's not only about giving no clue of where you are moving to, but mainly about being, over and over, in a situation where most enemies will be kind of obvious. But we do that because we know it. Let's just let our stats decide which situation that is for us :)
Yes, you can never predict firing situations if their movement is not fully predictable, but you don't really need to get into the exact situation either. For GF targeting, being near is already indistinguishable most of the time, and for lat vel = 0, lat accel = 0, time since decel, or things like that, you don't need to predict their movement exactly to get into the exact situation.
The only thing I see here is that, well, getting to a safe place is fairly easy given Robocode physics. But being at a safe place when the wave breaks AND in an obvious situation at the same time, with too little reaction time, is harder, besides requiring a more complex type of GoTo movement to cover enough possibilities... so yeah, it's perfect theoretically but really hard to make work in practice, but I'll definitely try that in the future.
I agree that there are many more possibilities in wave surfing and GF targeting that people didn't think of yet. This is what makes Robocode still so interesting, even though, when I look at top bots, I often marvel at how amazingly good they are :)
If a bot is already very strong and we want to make it better, it mostly boils down to reverse engineering. How well can we understand the enemy bot and therefore exploit his learning behavior?
Yes, a lot of the work resembles reverse engineering, but even if we know their code, we can still have some trouble knowing their behavior (have a look at the open-sourced top bots ;))
So instead of coming up with an idea about a specific opponent, why don't we build a system that does "reverse engineering" automagically? Since what a bot can do is limited, a lot of the knowledge about the opponent is wasted. But what if we reverse the direction, starting from what our bot can do, and see whether the opponent has a corresponding weakness?
Even if you reverse engineer a bot, there is always randomness. HawkOnFire is a well-understood bot, but even top bots get only around a 30% hit probability. Even Walls is not that predictable unless you really fine-tune for its specific algorithm.
Once there is randomness, the best you can do is collect statistics. Which is rather unfortunate.
A long time ago I thought about polluting the stats of the opponent, but at that time I couldn't think any further than using BulletHitBullet events for such a thing. Specifically going to the place where you would have been hit seemed cool. That idea was soon overtaken by BulletShielding and a bit later by BulletShadows. I think that the major part of new development will be in movement. Targeting has so much more info than movement that I don't see any groundbreaking things there. Maybe not automatically shooting when your gun is cool but waiting for a better moment, or using tricks to 'hide' shots among other events, but maybe something is waiting around the corner.
For movement a lot more ground is open and I do have some ideas there, especially screwing up the stats of the opponent, but frankly I have no clue how I could implement them. Do note that a lot of the better bots are very similar in their handling of gun data, maybe not in implementation, but surely in how they interpret data and act on it. For now I'll concentrate on a flattener, necessary for the GigaRumble, and a second attempt at BulletShadows. The first attempt already failed before I even started shadowing . . .
Yes, although I don't know exactly why BulletShadows work, one guess may be that we are polluting their data ;) For slow-learning guns, doing so makes them even more predictable, and fast-decaying guns just decay away the relevant data for shadowed locations ;)
The amazing part about polluting is that it makes top bots vulnerable. If you look at MoxieBot, it performs better against the top bots (relative to its APS neighbors). Unfortunately, MoxieBot has a bug which does not let it shine in the current rumble.
I recall I once made the mistake of counting virtual bullets as successful hits without looking at bullet-hit-bullet events. Bullet shielders just exterminated my bot, since I was shooting at the most probable GF but it was protected by the shield.
I am always shooting at the most probable GF, I just know how to bend the trajectory just enough to hit what I want (saw it in a movie with A. Jolie)
Recently I noticed some very, very low scores, some from battles run on my machine, none of which can be reproduced even with thousands of rounds. However, when an exception is thrown in one round, it only affects that round.
So if the zero score were caused by uncaught exceptions, one would have to be thrown in every round. What else can get my bot disabled in every round? It seems that my bot is not firing even one bullet, or moving even one pixel, in the rest of the rounds.
I had this problem about a year ago and I fixed it by finding an infinite loop that doesn't use getXXX() or setXXX() methods. Another thing I did was rewrite everything again, starting from the last version.
Thanks for sharing! Do you mean an infinite loop will disable your bot for every round? It seems that an infinite loop makes the system console print something like (xxx not stopping, trying a force stop); e.g. Krabby prints 35 of those messages per pairing on my machine. So that should only affect one round by itself. Anyway, I'm not seeing that message in the broken battles.
Yes, I mean that, but if you don't call those methods, the program doesn't throw an exception. It causes the bot to get stuck and stop updating itself, or sometimes it disables the bot.
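For what it's worth, a minimal sketch of that failure mode for an AdvancedRobot-style bot (the helper and the NaN branch are invented for illustration): a path through the main loop that never reaches a blocking call such as execute() leaves the bot skipping turns, and Robocode eventually force-stops or disables it.

```java
public void run() {
    while (true) {
        double angle = pickMovementAngle();  // hypothetical helper
        if (Double.isNaN(angle)) {
            continue;  // BUG: this path never reaches execute(), so the turn
                       // never ends and the bot hangs instead of acting
        }
        setTurnRightRadians(angle);
        setAhead(100);
        execute();  // the blocking call that actually ends the turn
    }
}
```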
I've been thinking about this for ages: is it better to use guns with similar attributes & weights but different decay rates, or guns with very different settings?
I was using the former in 0.012m7, then tried some very different weights in main gun in 0.012m8.
What's very interesting is that with the main gun only, performance increased (comparing 0.012m8.1 with 0.012l29); after more battles, though, performance decreased considerably ;)
But when put into the VG array, the better gun results in considerably decreased performance (comparing 0.012m8 with 0.012m7).
What I'm experiencing is that two strong guns, when combined, result in worse performance, and improving one of them makes the combination even worse ;)
Note that my VG is selecting the best gun based on normalized total hit rate, without any decay.
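For reference, a minimal sketch of that kind of selector (assumed names, ignoring whatever normalization ScalarBot really applies): each gun keeps virtual hit and shot counts, the array picks the gun with the best rate, and the decay factor marks where rolling would go, with decay = 1.0 matching the no-decay setup described above.

```java
class VirtualGunArray {
    final double[] hits;   // virtual hits per gun
    final double[] shots;  // virtual shots per gun
    final double decay;    // 1.0 = no decay; < 1.0 weights recent waves more

    VirtualGunArray(int guns, double decay) {
        this.hits = new double[guns];
        this.shots = new double[guns];
        this.decay = decay;
    }

    // Called when a virtual wave breaks: did gun i's predicted angle hit?
    void onVirtualWaveBroken(int gun, boolean hit) {
        hits[gun] = hits[gun] * decay + (hit ? 1 : 0);
        shots[gun] = shots[gun] * decay + 1;
    }

    int bestGun() {
        int best = 0;
        for (int i = 1; i < hits.length; i++) {
            if (rate(i) > rate(best)) best = i;
        }
        return best;
    }

    private double rate(int gun) {
        return shots[gun] == 0 ? 0 : hits[gun] / shots[gun];
    }
}
```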
This discussion about CC VG may help, though I've never experimented with it since I have other things to improve besides my VG. Link
Intuitively, when you put in two similar guns, it's hard for the VG to actually differentiate between the two. On the other hand, when you have two totally different guns against a learner, you are in the situation where, while you are scoring your secondary gun, the enemy is actually reacting to your primary gun, and if they are really different the hit rate may not be so meaningful.
Anyway, most of the top bots today use a simple VG array based on hit rate, so their authors must have experimented more with the weighting schemes than with the strategy of picking the best gun.
Since AS guns are secondary (most targets still don't have strong surfing movement), imo it's worth sacrificing them a little bit to make sure they are not chosen against non-adaptive targets.
But the problem with the former, in my experiments, is that the AS gun fires almost the same as the main gun a lot of the time, and wins borderline cases with luck (when the main gun and the AS gun are both firing in the right direction, but the main gun misses by, say, 1px). But even against non-adaptive targets, how badly the combination did compared with solo also surprised me (against RaikoMicro, sometimes the rating says even the AS gun solo is better than the combination).
Virtual guns against a surfer is nasty business, since your gun learns a movement that will change once the VG enables it. I tried to keep my VG array to a minimum, and instead focus on making 2 really good guns, plus a random gun for future proofing.