Talk:Diamond
I can't believe it, it's almost the same thing I'm doing in my TheRiver! Anyway, nicely done! Here are my results that I just ran, if you want them (EDIT: what's more, the debugging graphics are exactly the same as the ones I'm creating) » Nat | Talk » 06:16, 15 May 2009 (UTC)
Rank | Name | Total Score | Survival | Surv Bonus | Bullet Dmg | Bullet Bonus | Ram Dmg | Ram Bonus | 1sts | 2nds | 3rds
---|---|---|---|---|---|---|---|---|---|---|---
1st | voidious.Diamond 1.0 | 14361 (13%) | 10850 | 540 | 2807 | 164 | 0 | 0 | 6 | 9 | 5 |
2nd | abc.Shadow 3.83c | 13453 (12%) | 10100 | 630 | 2633 | 89 | 1 | 0 | 7 | 3 | 5 |
3rd | rz.Aleph 0.34 | 12804 (12%) | 9600 | 270 | 2861 | 73 | 0 | 0 | 3 | 4 | 5 |
4th | abc.Tron 2.02 | 12261 (11%) | 8450 | 630 | 3052 | 127 | 2 | 0 | 7 | 3 | 3 |
5th | kawigi.mini.Coriantumr 1.1 | 11909 (11%) | 450 | 360 | 2998 | 101 | 0 | 0 | 4 | 3 | 3 |
6th | florent.XSeries.X2 0.7 | 9472 (9%) | 695 | 90 | 2349 | 84 | 0 | 0 | 1 | 4 | 2 |
7th | ags.surreptitious.MiniSurreptitious 0.0.1 | 9411 (9%) | 6550 | 0 | 2785 | 68 | 8 | 0 | 0 | 4 | 5 |
8th | tzu.TheArtOfWar 1.2 | 9240 (9%) | 6750 | 90 | 2355 | 42 | 2 | 0 | 1 | 3 | 4 |
9th | abc.tron3.Tron 3.11 | 8050 (7%) | 5400 | 360 | 2245 | 42 | 4 | 0 | 4 | 2 | 1 |
10th | davidalves.Phoenix 1.02 | 7443 (7%) | 5550 | 180 | 1679 | 32 | 2 | 0 | 2 | 0 | 2 |
Contents
Thread title | Replies | Last modified |
---|---|---|
Flattener? | 3 | 13:47, 6 December 2013 |
Bug in Diamond | 8 | 04:44, 1 June 2013 |
Congratulation .... | 1 | 17:17, 28 June 2012 |
New PL king | 8 | 06:59, 23 November 2011 |
Testbed | 8 | 23:08, 11 September 2011 |
Yep, it does. In spirit, the decision criteria and configuration are similar to Dookious. Maybe the biggest implementation difference, besides VCS vs DC, is that Diamond has a more sophisticated system to decide when to enable it. Dookious has "tiers" - like never enable it in the first few rounds, then the hit % threshold is 9% until round 10, 10% until round 20, etc. Diamond just uses a single hit % threshold plus a margin of error based on how much data he's collected, similar to an election polling formula.
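For the curious, a minimal sketch of that style of check - the names, threshold, and exact decision rule below are my assumptions for illustration, not Diamond's actual code:

```java
// Hypothetical flattener-enablement check using a polling-style
// margin of error: with few samples the margin is large, so the
// flattener stays off until the evidence of a high hit % is solid.
public class FlattenerDecider {
    static final double HIT_THRESHOLD = 0.09; // assumed threshold
    static final double Z = 1.96;             // ~95% confidence

    // hits = times the enemy hit us, shots = enemy waves that reached us
    static boolean enableFlattener(int hits, int shots) {
        if (shots == 0) {
            return false;
        }
        double p = (double) hits / shots;
        // Standard polling margin of error: z * sqrt(p * (1 - p) / n).
        double margin = Z * Math.sqrt(p * (1 - p) / shots);
        return p - margin > HIT_THRESHOLD;
    }
}
```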
If you want to check out the code, see "initSurfViews()" in voidious/move/MoveEnemy.java. The last 3 views are for the flattener. A "view" is a kd-tree plus all its associated configuration data, like the parameters that decide k, the max data points, the decay rate, and the hit % threshold before it's enabled.
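Reading between the lines, a "view" might look something like the stub below - all names here are illustrative stand-ins, not Diamond's actual classes:

```java
// Placeholder for whichever kd-tree implementation is used.
interface KdTree { }

// One "view": a tree of data points plus the knobs governing its use.
class SurfView {
    KdTree tree;           // the stored data points
    int k;                 // nearest neighbors fetched per lookup
    int maxDataPoints;     // cap on stored points
    double decayRate;      // weight decay applied to older points
    double hitThreshold;   // enemy hit % required before this view is enabled
    boolean isFlattener;   // true for the 3 flattener views
}
```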
See Flattener. Instead of moving to the places with the least danger, you move to the places you have been least often - depending, of course, on your configuration and segmentation. In the case of a wave surfer, recording every fired bullet instead of only the times the enemy actually hit you results in a 'flat' movement => flattener. The biggest problem with a flattener is deciding when to switch to it and when not to. (pro tip: search the wiki for info)
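A minimal sketch of that core idea - the bin layout and names are assumptions, not any particular bot's code:

```java
// Log a visit for EVERY enemy wave that passes (hit or not), then
// treat the most-visited guess factors as the most dangerous places.
public class FlattenerProfile {
    static final int BINS = 47;              // guess-factor bins over -1.0 .. 1.0
    final double[] visits = new double[BINS];

    // Called whenever any enemy wave passes us, whether it hit or not.
    void logVisit(double guessFactor) {
        visits[binFor(guessFactor)]++;
    }

    // Fewer past visits == flatter (safer) place to be next time.
    double danger(double guessFactor) {
        return visits[binFor(guessFactor)];
    }

    static int binFor(double guessFactor) {
        int bin = (int) ((guessFactor + 1) / 2 * (BINS - 1));
        return Math.max(0, Math.min(BINS - 1, bin));
    }
}
```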
Well, sorry to ruin Christmas break for you but...
I was watching a battle with Diamond, and I noticed that there were bullets in the area marked safe. The shadow was quite large (large enough for me to be able to say it was one shadow), so I don't think it's possible that the bullet that supposedly made the shadow hadn't passed yet (unless there were two shadows near each other and a third bullet that would have connected them had just been fired). In short, it appears there is a bug in your bullet shadows.
Thanks for the notice! If it was a big shadow, it's probably a bug with merging shadows or angle normalization. I think that stuff's pretty tight so it might be tough for me to duplicate it enough to debug it. But I'll give it a shot when I have some time. Thanks. :-)
Oh, and finding a bug for me to fix in Diamond doesn't ruin anything, it's great. If I can't get to #1 with bug fixes, I have to actually think of innovative new features. ;)
Just to be clear - did you see if the bullet in the safe area hit one of my bullets before it reached me? Because that would not be a bug; it would just mean Diamond knew beforehand that one of his fired bullets would intersect any bullet in that range.
Yeah, I'm pretty sure it didn't, but I MIGHT have missed it. I think I have a bug with bullet shadows in Gilgalad, and to test it I just ran two 1000-round battles against HawkOnFire with and without bullet shadows. HawkOnFire's bullet damage without bullet shadows was 103 and with bullet shadows it was 192, so it appears I will be suffering with you...
Still searching for the similar bug in Gilgalad. I just found that it occurs in my intersection calculations (rather than in the merging), which will make it more annoying to find. Perhaps I should have tested for bugs before optimizing those...
I discovered my bullet shadows didn't contain any 'impossible bullets' once I moved my bullets a tick forward - how about trying an empirical test for whether your bullets are a tick ahead of or behind where you think they are?
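One way to run that empirical test inside Robocode itself: setFireBullet() returns a robocode.Bullet whose getX()/getY() you can poll every tick, so you can compare the engine's position against your own bookkeeping. The prediction model below (bullet departing from the robot's position at the turn of the fire call) is deliberately naive - a persistent error of about one bullet-velocity means your tracking is a tick off:

```java
import java.awt.geom.Point2D;
import robocode.AdvancedRobot;
import robocode.Bullet;

public class TickCheck extends AdvancedRobot {
    Bullet bullet;
    Point2D.Double firePos;
    long fireTime;

    public void run() {
        while (true) {
            if (bullet == null || !bullet.isActive()) {
                // Record where/when we THINK the bullet starts.
                firePos = new Point2D.Double(getX(), getY());
                fireTime = getTime();
                bullet = setFireBullet(2.0); // null while the gun is hot
            } else {
                // Where our model says the bullet should be now.
                double dist = bullet.getVelocity() * (getTime() - fireTime);
                double predX = firePos.x + Math.sin(bullet.getHeadingRadians()) * dist;
                double predY = firePos.y + Math.cos(bullet.getHeadingRadians()) * dist;
                double error = Point2D.distance(predX, predY, bullet.getX(), bullet.getY());
                System.out.println("tick " + getTime() + " error: " + error);
            }
            execute();
        }
    }
}
```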
So I tried that, and fixed the possible "bullet start and end positions are in the wave, but it passes outside of it" bug. Still short of perfection (though there is a measurable improvement against HawkOnFire over 1000 rounds). Would someone mind explaining how Robocode deals with bullets that hit both the enemy robot and their bullet on the same turn?
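For reference, a hedged sketch of the per-tick geometry involved, assuming the usual approach of intersecting the bullet's movement segment with the band the wave front sweeps in one tick. It is a coarse approximation: a full implementation would clip the segment to the band before projecting angles, and must normalize angle intervals spanning +/- PI - exactly the merging/normalization territory mentioned earlier in this thread:

```java
import java.awt.geom.Line2D;
import java.awt.geom.Point2D;

public class ShadowMath {
    // Returns the [min, max] absolute bearings (radians, from the wave
    // source) shadowed this tick, or null if the bullet's movement
    // segment cannot touch the band swept by the wave front.
    static double[] shadowThisTick(Point2D.Double waveSource,
                                   double radiusNow, double waveSpeed,
                                   Point2D.Double bulletNow,
                                   Point2D.Double bulletNext) {
        double radiusNext = radiusNow + waveSpeed;
        // The segment's nearest approach to the source can be at an
        // INTERIOR point, so checking only the endpoints is how
        // "endpoints in the wave, but it passes outside" bugs happen.
        double minDist = Line2D.ptSegDist(bulletNow.x, bulletNow.y,
                bulletNext.x, bulletNext.y, waveSource.x, waveSource.y);
        double maxDist = Math.max(waveSource.distance(bulletNow),
                waveSource.distance(bulletNext));
        if (maxDist < radiusNow || minDist > radiusNext) {
            return null; // stays entirely inside or outside the band
        }
        double a1 = Math.atan2(bulletNow.y - waveSource.y,
                bulletNow.x - waveSource.x);
        double a2 = Math.atan2(bulletNext.y - waveSource.y,
                bulletNext.x - waveSource.x);
        return new double[] { Math.min(a1, a2), Math.max(a1, a2) };
    }
}
```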
I'm not sure how Robocode deals with simultaneous impacts. I think you'd have to dig in the source code to find out exactly what happens.
I guess after 17k battles and still no #1 in general melee, it is time to break out the congratulation cards :)
Congratulations on taking the melee crown. That's a hell of a bot you've put together there.
Take Care
Hey, remove Tomcat from your testbed - it's just a sweet little kitty! :)
Enough to hurt my defenseless kitty! :) He can't even do multiple wave surfing right :)
Very nice there! Note to self: better throw some of my new top-secret bullet power research into Scarlet, to try to make it more challenging for folks to get 100% in the PL ;)
For the first time since 2004 I have assembled a testbed, but now I notice something I had not thought about beforehand. Some bots, like Phoenix and ad.Quest, save data, and that could influence the outcome. Do you use data-saving bots in your testbed, and if so, how do you arrange that every version starts with a clean sheet against them? I want to alter my 40-bot testbed anyway, because one run (35 rounds) takes about 30-35 minutes. I still use a single-core 2.66 GHz P4, although I have an 8-core i7 laptop from work available. --GrubbmGait 14:39, 11 September 2011 (UTC)
I do use data saving bots and am just ignoring the issue right now, actually, which is perhaps stupid. But it should be pretty easy to modify RoboResearch to delete the robot.database and .data after every season, so they get rebuilt and saved data is cleared. That would be a very nice option to have. Are you using RoboResearch?
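Something like the snippet below is all that cleanup would take - the paths are illustrative and depend on where RoboResearch keeps its Robocode copies:

```java
import java.io.File;

// Between-seasons cleanup: wipe the robots' saved data and the robot
// database so they get rebuilt from scratch next season.
public class DataWipe {
    public static void main(String[] args) {
        File robocodeHome = new File(args.length > 0 ? args[0] : "robocode");
        deleteRecursively(new File(robocodeHome, "robots/.data"));
        new File(robocodeHome, "robots/robot.database").delete();
    }

    static void deleteRecursively(File f) {
        File[] children = f.listFiles(); // null for plain files
        if (children != null) {
            for (File child : children) {
                deleteRecursively(child);
            }
        }
        f.delete();
    }
}
```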
One little note: another option would perhaps be to modify RoboResearch to enable the option, in newer versions of Robocode, to obfuscate/randomize enemy names.
I thought that would return a consistent name for the same bot each time, just preventing you from pre-loading bot-specific behavior? I started browsing the code just now to try and say for sure, but I wasn't able to uncover the answer as quickly as I'd hoped...
Ah wait... it looks like the "anonymous names" are just "#1" and "#2", according to their position in the participants list for the battle... That's unfortunate, because unlike randomized names, that would really confuse data-saving robots by giving them polluted results.
See "getAnnonymousName()" in net.sf.robocode.host.RobotStatics
On second thought, it is not really an issue. Most bots that save data do it separately for each version. Only bots like cx.BlestPain, which save data without version information, would I have to remove from my testbed. And of course regularly remove the contents of 'working_dirs'. Any tips about the contents of the testbed? I have now selected 10 of my 20 worst White Whales, 10 bots out of the top 30, and 20 bots ranging from 40-140 with a PBI close to zero. Still busy with 30 seasons at 30 minutes each, sigh. --GrubbmGait 19:24, 11 September 2011 (UTC)
These days I generally go for big test beds generated by BedMaker. Like right now, my main test bed (when working on APS) is 100 random bots that Diamond scores <= 80% against. Your test bed sounds well put together to me - improving against problem bots is good, and having a diverse set of other bots helps to ensure you're improving in general instead of just specializing against a different set of bots.
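In case it helps, a toy version of that selection step - the data source and names are made up for illustration, not BedMaker's actual code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class BedMakerSketch {
    // apsVsOpponent: your average score (APS) against each rumble bot.
    // Returns up to `size` random bots you score at or below `cutoff` against.
    static List<String> makeTestBed(Map<String, Double> apsVsOpponent,
                                    double cutoff, int size, long seed) {
        List<String> pool = new ArrayList<>();
        for (Map.Entry<String, Double> e : apsVsOpponent.entrySet()) {
            if (e.getValue() <= cutoff) {
                pool.add(e.getKey());
            }
        }
        Collections.shuffle(pool, new Random(seed));
        return pool.subList(0, Math.min(size, pool.size()));
    }
}
```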
Does Diamond still have more than 100 bots it scores less than 80% against? That's a bit disappointing, isn't it ;-)
Ok, me and my big mouth. Diamond has around 200 bots that qualify for the above statement; GresSuffurd still has around 300. But running such testbeds for 30 seasons or so will cost you a lot of time, certainly if you like to improve gradually (like me) instead of big-bang.