Talk:ScalarBot/Version History


Contents

Thread title | Replies | Last modified
how to build a good test bed? | 30 | 17:11, 16 June 2019
Runnable interface | 2 | 15:42, 29 December 2017
Java static keyword bug??? | 0 | 16:13, 30 November 2017
0.014c | 8 | 03:08, 30 October 2017
Wall Smoothing | 9 | 18:39, 22 October 2017
:D | 16 | 11:30, 22 October 2017
WaveSurfing rethink | 19 | 05:27, 17 October 2017
What bug can cause zero score? | 3 | 17:37, 12 October 2017
Virtual Gun array settings? | 6 | 02:55, 9 October 2017

how to build a good test bed?

Recently I tried a lot of movement tuning, and the result is promising — the new version performs very well in my test bed (which consists of bots I performed badly against in the past, including some guess factor targeting bots, DC bots and a simple targeter with VG). However, the rumble result shows a huge performance regression ;/

Then I tried another version; when published to the rumble, it showed a huge performance increase (and a small increase at full pairing) — but after ~5000 battles, the performance actually decreased compared to the baseline version.

My test bed runs 30 seasons against 10 bots at 35 rounds per battle — 300 battles in total, but the results show little correlation with the rumble score. Is it the bots I chose that make it a bad test bed, or do I just run too few battles?

The bots I use in my test bed are FloodHT, SandboxDT, RaikoMicro (gf targeting bots), Tron, Aleph (dc bots), Che, Fermet, WeeklongObsession (pattern matchers), GrubbmGrb (“simple” targeting)

Again, it seems that even after ~3000 battles, the rumble score is still not reliable enough to be used to compare two versions.

So here come my questions: how do you evaluate your bot? How many bots are in your test bed, and how many battles do you run against each of them?

Xor (talk)02:03, 27 September 2017

I think your problem is that you are already in the top 10 :) while you are testing against relatively simple bots (by modern standards). You probably already have scores pushing above 90% against these bots. If I were you, I would choose a test bed from the top 30 or even the top 10. But after all, the only real test is the rumble; maybe there is a bunch of bots against which you are underperforming and none of them are in the test bed.

Otherwise I do something similar, but my bot is not that high, so my test bed shows relevant scores. Though sometimes it is somewhat off. I also notice that the score in the rumble always slides down until it settles. I am not sure why; maybe some bots which save stats keep improving with each round for a while.

But lately I've noticed that in the melee rumble the slide down is somewhat catastrophic. When I introduced EvBot v9.2 it was in the top 20 for the first 300 pairings or so, and then just plunged about 20 extra places down. I see it with several of the latest releases and still cannot understand why.

Beaming (talk)03:35, 27 September 2017

IIRC, in past versions the improvement over the previous version was a reasonably good indicator of the final result, e.g. a 0.5 increase in APS over common pairings (e.g. 300 common opponents) indicated a 0.5 increase in final APS.

IMO the raw APS before full pairing is meaningless, but the difference in APS over common pairings is useful.

However, this version breaks the previous pattern: the difference in common APS is no longer an indicator, nor is the full-pairing APS.

The reason why I test against relatively "weak" bots is that the majority of the rumble is there, and what affects your score the most is also there. More than half of the bots in the rumble have an APS in [40, 70); there are only 160 bots above 70 and 324 bots below 40. Bots below APS 40 can be ignored IMO, as the improvement against them can only be marginal.

Xor (talk)04:34, 27 September 2017

Until the pairing is complete, APS is not a good indicator. I always go to the details of my bot and then select an older version to compare with. In that case only the bots that both versions have fought are taken into account. It indeed seems that the last 10% of the pairings involve the best opponents: GrubbmThree held around 58 APS till approx. 1000 pairings, then fell to 57.2. Note that even with 3000-5000 battles, there are still a lot of bots you have fought only once, so a few bad battles do have influence.

As for the test bed, I used to have around 20 bots in it (50 seasons): 5 top-50 bots, 5 'white-whales', 5 between place 100-300, and a few specific ones to check whether something was broken (e.g. bbo.RamboT must score less than 0.5%).

GrubbmGait (talk)12:07, 27 September 2017

Thanks for figuring that out! I thought the rumble stabilized very fast (that the common-APS difference over ~300 pairings was already useful), but it turns out it doesn't.

Maybe I should build a test bed with more variety, e.g. bots from all over the rumble with different kinds of strategies.

Xor (talk)14:19, 27 September 2017
 
 
 

I found that when a newer version is close to the previous version, comparing it against several different older versions can help — or, best of all, comparing against some baseline version that has enough battles and a stable score.

Xor (talk)15:40, 27 September 2017
 

It depends on what I am working on.

For movement, often a single bot is enough to prove a theory. Escape angle tuning is a rambot plus DevilFish, surfing mechanics is DoctorBob, anti-GF is RaikoMicro, anti-fast-learning is Ascendant, and for general unpredictability Shadow or Diamond.

Targeting I always find less interesting, maybe because it is a purer ML problem, with fewer ways to optimise that haven't already been studied in a related field. I decided to brute-force it by adding lots of features and then using genetic optimization to tune the weights against recordings of the entire rumble population, about 5000 battles. The surfers I did separately, but with the same process.

Skilgannon (talk)22:14, 27 September 2017

Wow, thanks for sharing! In the past I only tuned the movement against RaikoMicro with RoboRunner and by carefully watching battles, and that worked very well. Recently I tried some more brute-force approaches but they don't seem to be working. Maybe for an undeveloped ML area, some idea or theory is more useful.

Recordings of the entire population — I'm wondering whether it is useful to tune against wave surfers, which react to fire, in a way that makes their reaction irrelevant?

Or can we just treat wave surfers as random movement that is not random enough? And with so many attributes, will their reaction to fire be diluted enough to be ignored, so that proper decay alone is enough?

BTW, I’m really curious about how long it takes for a generation ;) And how many threads you are using to run it ;)

Xor (talk)00:40, 28 September 2017

Movement I find much more interesting - I think there is still a lot of unexplored potential here. Targeting can only get as good as the ML system though. The only tricks I see from targeting side involve bullet shielding and bullet power optimization.

For surfers I evolved the weights in multiple steps - record data, tune weights, re-record data, retune weights etc. I agree fixed data isn't ideal against learning movements, but it seemed to work ok.

By recorded battles, I mean I actually just recorded the ML-style interactions. So the only work to do in the genetic algorithm was: parse an input line, add it to the tree, and if it was a firing tick do KNN + kernel density, then N ticks later check whether the prediction was within the correct bounds.

About 15 minutes per generation for an i5-2410M using 4 threads.

Skilgannon (talk)07:25, 28 September 2017

So recording only gun waves seems OK? And IMO the gun prediction for each wave can be evaluated immediately, since the result is already known. Btw, are you optimizing overall hit rate (e.g. total hits / total shots over all battles) or Robocode score (e.g. average bullet damage per battle)? I think the latter should be better when bullet power selection is also being evaluated (or when it is not disabled). But since in real battles hits/misses also affect the total number of waves per round, that would be inaccurate for recorded battles. So how do you deal with bullet power? IMO using the recorded values sounds reasonable, although not perfect.

The difference between evaluating overall hit rate and average bullet damage per battle is interesting. It seems the latter weights by damage per bullet. Also, when comparing average hit rate per battle with overall hit rate, the former weights battles by the number of bullets fired per battle.

Xor (talk)08:57, 28 September 2017

I optimized for hit rate. Bullet power was kept the same as when it was recorded.

And I saved/loaded all waves (for learning), but only did prediction using firing waves.

Skilgannon (talk)10:17, 28 September 2017
 

So... each of those generations was evaluated against those 5000 battles, right? What was the size of your population? I tried my hand at genetic tuning some time ago, but I gave up because my evolution step seemed too slow. I'm wondering what your population size was when you got those 15 minutes, because one generation with 150 battles takes me waaay more than that :/ I'll need some reference point to optimize my targeting system.

Rsalesc (talk)22:30, 3 October 2017

From memory, the population size was about 20. It was something between gradient descent and a genetic algorithm: members moved towards the stronger members and away from the weaker ones, plus some random component. Remember, I had already extracted all of the features etc. and saved them just before inserting into the Kd-Tree, so the only thing I needed at evaluation time was:

  1. read data from file
  2. add points to the tree
  3. KNN/KDE
  4. count inliers vs outliers -> give a score

Then at the end multiply the evolved weights with the code weights, recompile, and collect a new set of data; repeat until happy.
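
To make the steps concrete, here is a rough sketch of what such an evaluation function might look like (all names are made up, and a brute-force neighbour search stands in for the real Kd-tree; this is only an illustration, not Skilgannon's actual code):

import java.util.*;

/** Rough illustration only: scores one candidate weight vector against
 *  pre-recorded waves, with brute-force KNN standing in for the Kd-tree. */
class WeightEvaluator {

    static class Wave {
        double[] features;      // raw, unweighted attributes recorded from the battle
        double visitedGf;       // guess factor the enemy actually visited
        double gfLow, gfHigh;   // GF range that would have hit, known from the recording
        boolean firingWave;     // true if a real bullet was fired on this tick
    }

    static double evaluate(double[] weights, List<Wave> recorded, int k) {
        List<double[]> points = new ArrayList<>();
        List<Double> gfs = new ArrayList<>();
        int inliers = 0, total = 0;

        for (Wave w : recorded) {
            if (w.firingWave && points.size() >= k) {
                final double[] q = scale(w.features, weights);
                // brute-force k nearest neighbours (the real code uses a Kd-tree)
                Integer[] idx = new Integer[points.size()];
                for (int i = 0; i < idx.length; i++) idx[i] = i;
                Arrays.sort(idx, Comparator.comparingDouble(i -> dist(points.get(i), q)));
                // crude stand-in for the kernel density peak: mean GF of the k neighbours
                double sum = 0;
                for (int i = 0; i < k; i++) sum += gfs.get(idx[i]);
                double predictedGf = sum / k;
                if (predictedGf >= w.gfLow && predictedGf <= w.gfHigh) inliers++;
                total++;
            }
            points.add(scale(w.features, weights));   // learn from every wave, firing or not
            gfs.add(w.visitedGf);
        }
        return total == 0 ? 0 : (double) inliers / total;   // fraction of "inlier" predictions
    }

    static double[] scale(double[] f, double[] w) {
        double[] s = new double[f.length];
        for (int i = 0; i < f.length; i++) s[i] = f[i] * w[i];
        return s;
    }

    static double dist(double[] a, double[] b) {
        double d = 0;
        for (int i = 0; i < a.length; i++) d += (a[i] - b[i]) * (a[i] - b[i]);
        return d;
    }
}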

Skilgannon (talk)22:56, 3 October 2017
 

I’m doing nearly the same thing now. I write knn data points and gfs to files, so all I do is just:

Read data from file; add to tree; KNN/KDE; count inliers vs outliers. And I'm only doing KNN/KDE on firing waves.

However it takes me ~10min per generation with only 1500 tcrm battles.

My population size is also 20, and I'm also using 4 threads. It's a Core i7 with 4 cores at 2.6 GHz, so it should be even faster than the i5-2410M, which has only 2 cores.

Are you reading data and adding to tree at the same time, or reading data to memory in one go and adding to tree then?

Xor (talk)02:34, 31 May 2019

It was: read a line, add to tree, and if it was a firing tick do a prediction. For parallelization I just started a new thread for each bot, and joined the thread when the bot was processed. It would probably be a bit faster with a thread pool.
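
A minimal sketch of the thread-pool variant, assuming a hypothetical processBot(...) helper that runs the per-bot read/add-to-tree/predict loop over one bot's data file (again an illustration, not the original code):

import java.io.File;
import java.util.*;
import java.util.concurrent.*;

// Hypothetical sketch: one task per opponent, bounded by a fixed-size pool
// instead of one thread per bot. processBot(...) stands in for the
// read-line / add-to-tree / predict loop described above.
static double evaluateAll(List<File> botFiles, double[] weights) throws Exception {
    ExecutorService pool =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
    List<Future<Double>> results = new ArrayList<>();
    for (File botData : botFiles) {
        results.add(pool.submit(() -> processBot(botData, weights)));
    }
    double score = 0;
    for (Future<Double> f : results) {
        score += f.get();               // blocks until that bot's battles are processed
    }
    pool.shutdown();
    return score / botFiles.size();
}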

Unfortunately I think I lost this code, I think it was on my University computer...

Skilgannon (talk)12:09, 5 June 2019
 
 
 

Holy smoke! Using the whole rumble for tuning. It probably takes half a day to run one generation of a genetic algorithm.

Beaming (talk)01:33, 28 September 2017

Running 5000 battles on the fly takes me ~4 hours IIRC. But recorded battles should take less time IMO.

Xor (talk)02:18, 28 September 2017

What are recorded battles?

Beaming (talk)03:21, 28 September 2017

e.g. WaveSim by voidious

Xor (talk)06:16, 28 September 2017
 
 
 
 
 

Runnable interface

Hi Xor,

Would you mind elaborating on "Use Runnable interface instead of onTurnEnded Custom Event to execute(), which is MUCH faster when battle speed set to max"?

Why would it be faster if the code presumably does the same things?

Beaming (talk)18:38, 30 September 2017

Idk ;/ but the test result is obvious — against DoctorBob, ScalarBot 0.01e runs MUCH faster than 0.01f, and the only fundamental difference in 0.01f is the use of the onTurnEnded event. The former finishes almost instantly, but the latter takes noticeable time. Idk whether this can be reproduced on a different machine, a different OS or a different Java version, but that's how it behaves on my computer.

Anyway, the use of Runnable (the traditional way) in 0.012k8 makes my APS drop by 0.1 points ;/ Further research is ongoing; I think the only way forward is to read the Robocode source code to see how Runnable is handled.

I guess that for bots without a run loop, Robocode generates a loop anyway, but the generated one is way slower for some reason.

Xor (talk)02:24, 1 October 2017

More information:

1. I've had a look at the Robocode source — when you don't call execute() yourself, Robocode calls it for you every turn, which is theoretically the same as while(true){execute();} (see the sketch below).

2. I call rescan instead of execute to fix some missed-scan bugs.
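
For reference, a minimal sketch of the traditional run-loop style using the standard Robocode API (the per-turn helpers are just placeholders):

public class RunLoopBot extends robocode.AdvancedRobot {
    public void run() {
        while (true) {
            doRadar();      // placeholder per-turn helpers
            doMovement();
            doGun();
            execute();      // send all pending actions and block until the next turn
        }
    }
    private void doRadar() {}
    private void doMovement() {}
    private void doGun() {}
}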

Xor (talk)15:41, 29 December 2017
 
 

Java static keyword bug???

I've been experiencing a noticeable performance regression after a refactor where, theoretically, the only difference is that a static keyword was added to a private final double[] (whose content is never changed).

Removing that "static" keyword immediately brings my performance back, and adding it back effectively decreases my score, so I'm pretty sure that is the reason. So here is my question: how does Java handle static final arrays? Could that be some bug introduced by optimizations?

Xor (talk)16:12, 30 November 2017

0.014c

It's great to see that ScalarBot will enter melee as well. I'm excited; there are going to be some nice battles :)

Cb (talk)10:01, 28 October 2017

Thanks ;) I'm also excited to see what's going on ;)

Xor (talk)13:49, 28 October 2017
 

About skipping turns: I had an issue where I heavily skipped turns in RR@H but could not reproduce it against some pre-defined battles I had. I could only get skipped turns when I ran the exact same battles I had skipped turns in on RR@H, so I had to log the exact scenarios that led to skipping that many turns. Yeah, it's weird, but that's what often happens in Robocode. At least it forced me to do a bunch of weird optimizations, and I'm still skipping a bunch of turns btw. I don't know if you are already doing that, but I hope it helps anyway. Excited to see ScalarBot performing in melee!

Rsalesc (talk)14:54, 28 October 2017

Well, I thought that skipping turns could be avoided by not using the full CPU to run the rumble (leaving a thread free for GC, etc.), as it seems that I skip turns heavily (and randomly) against every bot (although I've only examined two bots, selected randomly by rumble@home). I'm already doing weird optimizations everywhere, which destroys readability & extensibility.

Anyway, I've never verified that skipping turns can be avoided at all. But being able to handle inconsistent scans always helps; that's why I decided on a full rewrite.

Xor (talk)16:18, 28 October 2017

That was my main issue when going from the rumble to melee. I basically destroyed all my code's beauty because of those constant optimizations, but they really proved necessary; it became evident that my code was really slow when I released Medina. It actually helped a bit in 1v1 too, as some optimizations were not that bad and actually gave me more room to work on Knight.

Rsalesc (talk)16:56, 28 October 2017

Just found out why I'm skipping turns so badly —

if (Math.random() < 0.01) {
  // leftover stress test: roughly once every 100 turns, allocate 10 million objects,
  // which triggers long GC pauses and skipped turns
  ArrayList<Object> list = new ArrayList<>();
  for (int i = 0; i < 10000000; ++i) {
    list.add(new Object());
  }
}

I forgot to comment this out.

Anyway, I think I'm still skipping turns badly when I run two clients together, as a newer version which is theoretically the same as the previous one performs much worse.

Xor (talk)06:28, 29 October 2017

I can never run more than one battle (1v1) at a time without skipping turns, because the computer I have at the moment is not well suited for this. Never even tried Melee. Maybe you should check if two Neuromancer battles would cause skipped turns and, if not, then specifically worry about ScalarBot.

Rsalesc (talk)06:42, 29 October 2017
 
 
 
 

Yeah, I always found it too difficult to mix one-on-one with melee in one bot (except of course for Gruwel). The fields of melee and duel are so far apart in my mind that I can't blend them together in one well-performing bot. Btw, until you are ready to release the first of the 0.014c series, there is no reason to pull back the best-performing 0.012n version. It is a killer bot and I am always happy to get trashed by a better bot. Besides that, reaching the top 10 is a bit inflated, as it really is the top 11.

GrubbmGait (talk)01:01, 30 October 2017

Yeah, agreed that it is hard to mix them. But my motivation is that melee capability will force me to come up with a better framework, a better architecture and better tolerance of a harsh running environment, e.g. heavily skipped turns.

I just put the discontinued series back, as I don't think inflating the rank is a good thing either. My initial motivation for removing it was to force myself to work harder on the new series, but since melee is already appealing to me, that is no longer needed.

Xor (talk)03:08, 30 October 2017
 
 

Wall Smoothing

I'm sometimes seeing ScalarBot hitting the walls and turning its gun in the opposite direction. Maybe a bug in your wall smoothing?

Dsekercioglu (talk)15:31, 22 October 2017

Yes, I see ScalarBot hitting the wall quite often at the beginning of the round, but I never noticed that the gun could turn in the opposite direction because of this. Is the gun turning away from the enemy?

Xor (talk)16:13, 22 October 2017
Yes, I took a screenshot but I don't know how to put it in the wiki. =)
It also turns its radar in the opposite direction.
Dsekercioglu (talk)16:14, 22 October 2017

You can use Special:Upload ;)

Btw, does that happen before first scan, or is that in the middle of the round?

Xor (talk)16:21, 22 October 2017

I think that it is in the middle of the round. Rechner's energy was 80 and ScalarBot's was 20. I'm sure that it didn't lose energy by hitting.

Dsekercioglu (talk)17:04, 22 October 2017

Just had a look at my radar — it never handles loss of scan. So if I ever skip a turn, the radar may lose the scan forever.

Btw, which version of ScalarBot are you testing? And are you running Robocode with Java 9?

Xor (talk)17:32, 22 October 2017
 

And are the radar and gun still turning when you take the snapshot? I think they may turn forever if the scan is lost.

Btw, is your battlefield 800 x 600?

Xor (talk)17:36, 22 October 2017
 

Thank you, I uploaded it.

Dsekercioglu (talk)17:08, 22 October 2017
 
 
 
 
:D

"Just created a new version that wins 66% constantly against 0.012n1.14c"

Oh, god, do not tempt me. Just release it, please! :P

Was it a movement change or an AS change? Looking forward to seeing its impact against the top bots and the rumble.

Rsalesc (talk)19:39, 20 October 2017

Well, it’s a pure tick flattener movement. It improves my score against several good bots as well. But it doesn’t work against weak bots ;)

Xor (talk)00:46, 21 October 2017

Wow, nice PWIN scores! Is this just using a tick flattener? I know that was something I added to DrussGT but never saw any real benefit from - then again, I never weighted it at 100% either.

Skilgannon (talk)13:19, 21 October 2017

Thanks a lot! The published version uses one single tree as a tick flattener and another tree as ordinary hit stats. The secret is that I don't accumulate the danger from the two trees (like a logical "or"); rather, I multiply them (like a logical "and"). This way, I'm moving to where they are probably not firing, instead of merely avoiding where they probably are firing ;)

I think this approach makes my movement even more unpredictable than a) hit stats only, b) flattener only, or c) the sum of both. As for strong guns, they fire everywhere, which leaves no safe spot (except for bullet shadows, which, for the same reason, should improve the score dramatically).

Worth mentioning that I originally took the idea from ABC, but I can't recall which page it was on ;/ I had tried this idea years ago, but it didn't work IIRC.
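
A minimal sketch of the multiplicative combination described above (names are illustrative, not ScalarBot's actual code):

// Illustrative only: per candidate guess factor, dangers from the two trees are
// multiplied ("and") instead of summed ("or"), so a spot only counts as dangerous
// when BOTH the hit stats and the tick flattener say the enemy is likely to fire there.
static double combinedDanger(double hitStatsDanger, double flattenerDanger) {
    return hitStatsDanger * flattenerDanger;
}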

Xor (talk)15:21, 21 October 2017

Interesting approach! I had a similar idea around multi-bot targeting in Neuromancer where I do an XOR instead of simple OR because the bullet can only hit one of the potential enemies. I think this idea might have some real potential - although I think maybe using fire-only waves might get you even better scores against a lot of enemies and their anti-surfer guns.

You can be sure that there will be some experiments with DrussGT in the not too distant future =)

Skilgannon (talk)17:41, 21 October 2017

More thoughts on this.

When you only work with bullet hits, you can only be reactive to changes in the enemy targeting. Modern bots are designed around this, and they do a really good job with the limited information they have available too (see for example DrussGT's score on the Shark Challenge part 2) - even with complicated learning guns like RaikoMicro it is possible to effectively predict and dodge to get better results than a random gun would give.

However, the holy grail has always been to somehow predict where the enemy will shoot even before finding any bullets there. Theoretically we have the information we need to do that - we know where we were, we know what GFs for both hits and visits were logged, we could even model the type of gun the enemy has based on the bullet hits and (theoretically) transfer this learning across to the visits data. However, until now there hasn't been any successful demonstration of using this pre-emptive data beyond just making a movement that is "flat with flat sauce" rather than tailor-made to dodge a specific gun.

I know in the past [[User:Voidious]] did quite a few experiments around adding very weak tick-wave flattening against mid-level opponents but was never able to realise any measurable gains. If this can be replicated across other bots and stats systems, I see it as a great step-wise improvement in the state of the art of Robocode, much like taking advantage of Bullet Shadows.

Skilgannon (talk)19:21, 21 October 2017
 

Well, I think this approach first helped me against their main guns. Then, as unpredictability improved, their AS guns also had some trouble hitting me. Anyway, adding a virtuality dimension may help further ;)

I've also been thinking about simulating the fact that a bullet can only hit one enemy at a time. But what I came up with is to use max(enemy1, enemy2) instead of enemy1 XOR enemy2, since when I have a 50% probability of hitting enemy1 and a 75% probability of hitting enemy2 at the same bearing offset, I end up with a 75% probability of hitting an enemy, instead of 125% ;p Anyway, by XOR, do you mean something like max - min? Or "enemy1 + enemy2 - 2 * enemy1 * enemy2", like a fuzzy XOR?
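
(For the numbers above, assuming the two hits are independent: probabilistic OR gives 0.5 + 0.75 - 0.5 * 0.75 = 0.875, max gives 0.75, and the fuzzy XOR formula gives 0.5 + 0.75 - 2 * 0.5 * 0.75 = 0.5, i.e. the chance of hitting exactly one of the two.)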

Xor (talk)01:51, 22 October 2017
 
 
 

Congratulations on 100% PWIN! Now it's only a question of time until you refine the tick flattener feature to work well against weak bots too ;)

Cb (talk)13:29, 21 October 2017

Thanks! The tick flattener once improved my score against some relatively weak bots, but I think it will never help me dodge HOT bullets ;p

Xor (talk)15:24, 21 October 2017

Potentially, this could be used to improve against any learning enemy.

Skilgannon (talk)19:25, 21 October 2017

Yes, but IMO the "and" strategy can only be used against guns that fire everywhere. Anyway, maybe some "(a and b) or (c and d)" strategy could be used instead, where a and c are hit stats, and b and d are tick flatteners.

Xor (talk)01:31, 22 October 2017
 
 
 
 
 

WaveSurfing rethink

Even though I have a bot that used to rank relatively high in the 1v1 division, I can't claim to fully understand what I was doing and why it works. I was always assuming some GF targeting which fires at the most frequently visited GF, with the most popular attributes in mind (e.g. segmenting on lateral velocity, acceleration, wall distance, etc.).

And even though I tried to consider more types of enemy targeting strategies later, I was still assuming some specific targeting strategy.

But today, after thinking about all that in dreams, an idea just came up.

Can we just not assume anything about the enemy strategy? Be tough yourself, and they'll automagically have some trouble hitting you.

But that’s not enough for a top movement. Besides not showing weakness in all senses, it’ll be pity if you lose the chance to be better dodging them.

Statistics will always tell you the truth — once you are sure that they always fire head-on in some situations, why not try your best to make them see the same situation again when they aim? Yes, I'm talking about automagical stop&go, but in my observation far more guns have similar weaknesses.

Besides firing situations, when you are sure that they are very likely to repeat shots they've fired before, falling back to traditional wave surfing seems good. Otherwise, why risk dodging somewhere they aren't firing at? If their targeting looks quite random (in a given firing situation), sitting still or moving randomly are also good choices. And if you don't move at all, you won't risk hitting the wall or getting yourself stuck somewhere either.

For every bit of the future you can predict, you can know what to do better. A lot of bots are too strict, IMO, following their design (and the assumptions behind it) rigidly. But I think we can do better and give more freedom to the bot itself (which always knows the situation better), rather than planning everything in advance. Given the success of GF targeting and then kNN, I'm pretty sure there is still a lot to explore even today, and a lot of room for bots to improve.

Xor (talk)14:08, 15 October 2017

I tried something like this in DrussGT 2.6.0, and debugged it until about 2.8.0, when I finally gave up on it. I've had some ideas since then that might help, but at least in a wavesurfing-style framework I was never able to get it to work. Maybe it requires looking further ahead than I did (I only did one wave), or maybe more penalty for entering unexplored parameter space, but I never managed to get any benefit from it.

Give it a try though, if you prove me wrong and find some value behind it I might just have to dust off DrussGT ;-)

Skilgannon (talk)18:10, 15 October 2017

The really important part IMO is deciding when to use this strategy. For LT/CT guns this will definitely help, but for guns with distanceLast10, timeSinceXXX, etc. doing so is rather hard — but when the time before impact with the wave allows us to do so, instead of decelerating randomly like DrussGT, I think this system could give us a better choice ;)

Xor (talk)03:02, 16 October 2017
 

It really helps actually. I once made a bot that always stopped before the enemy fired, so the enemy always saw the same situation. The problem was that it decreased my MEA a lot, but it would be really good if you tried it. I have the same thoughts but am toooo lazy to do it.

Dsekercioglu (talk)18:39, 15 October 2017

Yes, for guns without fancy attributes this should be more beneficial than MEA, as a large part of the MEA is unreachable for most waves anyway — and for the ones you can predict, they are not firing randomly, so decreasing MEA won't make things much worse either.

Xor (talk)03:04, 16 October 2017
 


Yes, you can never predict firing situations if their movement is not fully predictable — but you don't really need to get into the exact situation either. For GF targeting, being nearby is already indistinguishable most of the time, and for lat vel = 0, lat accel = 0, time since decel or things like that, you don't need to predict their movement exactly to get into the exact situation.

Xor (talk)03:26, 16 October 2017
 

The only thing I see here is that, well, getting to a safe place is fairly easy given Robocode physics. But being at a safe place when the wave breaks AND in an obvious situation at the same time, with too little reaction time, is harder, besides requiring a more complex type of GoTo movement to cover enough possibilities... So yeah, it's perfect in theory but really hard to make work in practice, but I'll definitely try that in the future.

Rsalesc (talk)21:09, 15 October 2017
 

I agree that there are many more possibilities in wave surfing and GF targeting that people haven't thought of yet. This is what makes Robocode still so interesting, even though, when I look at the top bots, I often marvel at how amazingly good they are :)

If a bot is already very strong and we want to make it better, it mostly boils down to reverse engineering: how well can we understand the enemy bot and therefore exploit its learning behavior?

Cb (talk)21:35, 15 October 2017

Yes, a lot of the work resembles reverse engineering — but even if we know their code, we can still have trouble knowing their behavior (have a look at the open-sourced top bots ;) )

So instead of coming up with an idea about a specific opponent, why don't we build a system that does "reverse engineering" automagically? Since what a bot can do is limited, a lot of the knowledge about the opponent is wasted — but what if we reverse the direction, starting from what our bot can do, and see whether the opponent has a corresponding weakness?

Xor (talk)03:38, 16 October 2017

Even if you reverse engineer a bot, there is always randomness. HawkOnFire is a well-understood bot, but even top bots get only around a 30% hit probability against it. Even Walls is not that predictable unless you really fine-tune for its specific algorithm.

Once we have randomness, the best you can do is collect statistics, which is rather unfortunate.

Beaming (talk)17:44, 16 October 2017
 
 

A long time ago I thought about polluting the stats of the opponent, but at that time I couldn't think any further than using BulletHitBullet events for such a thing. Specifically, going to the place where you would have been hit seemed cool. That idea was soon overtaken by BulletShielding and a bit later by BulletShadows. I think that the major part of new development will be in movement. Targeting has so much more info than movement that I don't see any groundbreaking things there. Maybe not automatically shooting when your gun is cool but waiting for a better moment, or using tricks to 'hide' shots in other events, but maybe something is waiting around the corner.

For movement a lot more ground is open, and I do have some ideas there, especially about screwing up the stats of the opponent, but frankly I have no clue how I could implement them. Do note that a lot of the better bots are very similar in their handling of gun data, maybe not in implementation, but surely in how they interpret the data and act on it. For now I'll concentrate on a flattener, necessary for the GigaRumble, and a second attempt at BulletShadows. The first attempt already failed before I even started shadowing . . .

GrubbmGait (talk)22:10, 15 October 2017

Yes, although I don't know exactly why BulletShadows work, one guess may be that we are polluting their data ;) For slow-learning guns, doing so makes them even more predictable — and fast-decaying guns just decay the relevant data for shadowed locations ;)

Xor (talk)03:47, 16 October 2017

The amazing part about polluting is that it makes the top bots vulnerable. If you look at MoxieBot, it performs better against the top bots (relative to its APS neighbors). Unfortunately, MoxieBot has a bug which does not let it shine in the current rumble.

I recall once making the mistake of counting virtual bullets as successful hits without looking at bullet-hit-bullet events. Bullet shielders just exterminated my bot, since I was shooting at the most probable GF but it was protected by the shield.

Beaming (talk)17:52, 16 October 2017

I am always shooting at the most probable GF, I just know how to bend the trajectory just enough to hit what I want (saw it in a movie with A. Jolie)

GrubbmGait (talk)19:14, 16 October 2017

But is your most probable GF precise enough? I.e. if the bin width is large, you might miss the shield. Am I taking your joke too seriously? :)

Beaming (talk)19:55, 16 October 2017
 

I don't think it's the fault of the top bots; rather, it's just that its neighbours are exploited by the top bots.

Xor (talk)05:27, 17 October 2017
 
 
 
 

What bug can cause zero score?

Recently I noticed some very, very low scores, some of them from battles run on my machine, none of which can be reproduced even with thousands of rounds. However, when some exception is thrown in one round, it should only affect that round.

So if the zero score is caused by uncaught exceptions, they must be thrown in every round. What else can get my bot disabled in every round? It seems that my bot is not firing even one bullet, or moving even one pixel, in the remaining rounds.

Xor (talk)08:46, 12 October 2017

I had this problem about a year ago and fixed it by finding an infinite loop that didn't use getXXX() or setXXX() methods. Another thing I did was rewrite everything again starting from the last version.

Dsekercioglu (talk)08:53, 12 October 2017

Thanks for sharing! Do you mean an infinite loop will disable your bot for every round? It seems that an infinite loop makes the system console print something like (xxx not stopping, trying a force stop); e.g. Krabby prints 35 of those messages per pairing on my machine. So that should only affect one round by itself. Anyway, I'm not seeing that message in the broken battles.

Xor (talk)09:04, 12 October 2017

Yes, I mean that, but if you don't call the XXX methods, the program doesn't throw an exception. It causes the bot to get stuck and stop updating itself, or sometimes it disables the bot.

Dsekercioglu (talk)17:37, 12 October 2017
 
 
 

Virtual Gun array settings?

Edited by author.
Last edit: 02:35, 9 October 2017

I've been thinking about this for ages — is it better to use guns with similar attributes & weights but different decay rates, or guns with very different settings?

I was using the former in 0.012m7, then tried some very different weights in the main gun in 0.012m8.

What's very interesting is that with the main gun only, the performance increased (comparing 0.012m8.1 with 0.012l29); after more battles, the performance decreased considerably ;)

But when put into the VG array, the better gun results in considerably decreased performance (comparing 0.012m8 with 0.012m7).

What I'm experiencing is that two strong guns, when combined, result in worse performance — and improving one of them makes the combination even worse ;)

Note that my VG selects the best gun based on normalized total hit rate, without any decay.
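
A minimal sketch of that selection rule (the class and counters are hypothetical, not ScalarBot's actual code):

import java.util.List;

// Illustrative only: pick the gun with the highest accumulated hit rate,
// with no rolling decay, so every recorded firing wave counts equally forever.
class VirtualGun {
    double normalizedHits;   // e.g. hits weighted by how hard each shot was to land
    double shots;

    double rating() { return shots == 0 ? 0 : normalizedHits / shots; }

    static VirtualGun best(List<VirtualGun> guns) {
        VirtualGun best = guns.get(0);
        for (VirtualGun g : guns) {
            if (g.rating() > best.rating()) best = g;
        }
        return best;
    }
}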

Xor (talk)17:15, 8 October 2017

This discussion about CC VG may help, though I've never experimented with it since I have other things to improve besides my VG. Link

Rsalesc (talk)18:33, 8 October 2017

Thanks for the link! It's good to know some of my random thoughts have already been experimented with, and also to see many new thoughts I'd never come up with before. However, I'm also surprised by the fact that others are struggling too ;/

Xor (talk)02:46, 9 October 2017
 

Intuitively, when you put in two similar guns, it's hard for the VG to actually differentiate between the two. On the other hand, when you have two totally different guns against a learner, you are in the situation where, while you are scoring your secondary gun, the enemy is actually reacting to your primary gun, and if they are really different the hit rate may not be so meaningful.

Anyway, most of the top bots today use a simple VG array based on hit rate, so their authors must have experimented more with the weighting schemes than with the strategy of picking the best gun.

Rsalesc (talk)21:08, 8 October 2017

Since AS guns are secondary (most targets still don't have strong surfing movement), IMO it's worth sacrificing them a little bit to make sure they are not chosen against non-adaptive targets.

But the problem with the former, in my experiments, is that the AS gun fires at almost the same angle as the main gun a lot of the time, and wins in borderline cases by luck (when the main gun and the AS gun are both firing in the right direction, but the main gun misses by, say, 1px). But even against non-adaptive targets, how badly the combination did compared with the solo guns also surprised me (against RaikoMicro, sometimes the rating says even the AS gun solo is better than the combination).

Xor (talk)02:54, 9 October 2017
 

Virtual guns against a surfer are nasty business, since your gun learns a movement that will change once the VG enables it. I tried to keep my VG array to a minimum, and instead focused on making 2 really good guns, plus a random gun for future-proofing.

Skilgannon (talk)23:03, 8 October 2017

Thanks, maybe two ordinary guns combine to be even worse, but two strong guns combine to be better ;)

Xor (talk)02:55, 9 October 2017