User talk:Voidious
For old discussions, see Archived talk:User:Voidious 20071111 and Archived talk:User:Voidious 20110909.
Contents
Thread title | Replies | Last modified |
---|---|---|
I can't enter the competition | 0 | 17:16, 7 February 2016 |
new PlayBerryBots.com | 0 | 07:51, 1 August 2015 |
BerryBots updates | 49 | 18:57, 2 February 2014 |
http://www.dijitari.com/void/robocode/ | 4 | 18:13, 1 December 2013 |
BerryBots pre-release testing help | 66 | 23:00, 29 October 2013 |
Client issues | 1 | 16:30, 18 August 2013 |
CPU benchmark advice | 31 | 14:43, 29 June 2012 |
weird bug I'm hitting | 5 | 06:03, 3 June 2012 |
I've packaged my robot Tomahawk but I don't know how to enter the competition. I found something about it but it was not detailed. Can someone help me?
Finally got the new web UI up - the big things are that it's in AngularJS (which is awesome) and that it has a "starter kit" where you can select from a few different movement and targeting snippets and it wires up a working bot for you. Pretty darn pleased with it! http://playberrybots.com
So I try not to spam you guys too much about BerryBots, but maybe I'll post about some of the bigger stuff.
A couple people have posted bots on the forums recently (both Robocoders):
- Frohman posted the first public bot, Insolitus, a battle bot that beats supersample.BasicBattler in 1v1 and draws about even with it in Melee (which means it also crushes the other samples). Some type of antigravity movement and linear targeting.
- Justin (of DemonicRage) has a bot called NightShade. It's the strongest 1v1 bot for now, also supports races and mazes, and has pretty sexy debugging graphics.
Next big thing coming is the Game Runner API, which I'm pretty excited about. I have it working now, probably releasing it in a week or two with some polish and related changes to results handling. Some cool things are that it handles multi-threading for you and makes it really easy to prompt the user for inputs (seasons, threads, ships, stages, etc). You launch a Game Runner script through the app kind of like starting a battle.
Just released v1.2.0 with the Game Runner API. I think it holds up pretty well in a comparison against the Robocode control API - it handles multi-threading under the covers, lets you configure input parameters and prompt the user for those fields with a graphical dialog, and includes some examples that do basic batch battles or a single-elimination tournament.
- Video overview
- BerryBots v1.2.0 details
- batchduels.lua gives a good idea how easily you can implement a basic multi-threaded batch battle runner.
I feel like this was the final prerequisite for the kind of real bot development I'm used to. So I'm pretty stoked on that. :-)
And now that Game Runner is out, I added a leaderboard for the LaserGallery stage. Might work on a gun myself sometime soon: LaserGallery/Scores
This is pretty cool - HTML5 replays: 6-bot melee on battle1
(Sorry, seems broken on Linux/FireFox, still troubleshooting that one.)
I finally got around to writing a learning gun for BerryBots. At first I was just out to port Rednaxela's kd-tree to Lua, since I thought that was a prerequisite to the gun I wanted, and a prerequisite to testing correctness of a kd-tree is writing an optimized linear KNN search. But once I had a linear KNN search, I realized it would be plenty fast enough to run the targeting challenge stage, which is only a few thousand ticks per enemy with a decent gun, so I built out the rest of the gun.
So it's mostly just KNN with waves / displacement vectors. A precise MEA and GuessFactors are probably worth pursuing. But I think the next thing to start handling is the presence of walls, which has a lot of ramifications.
It came out pretty good: testgun.lua. Not much code, runs pretty fast, reasonably effective. And it's a new high score on Laser Gallery.
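Roughly, the linear KNN part boils down to something like this (a minimal sketch, not the actual testgun.lua code - the situation fields and weights here are made up for illustration):

-- Minimal linear KNN sketch. Each log entry holds a normalized situation
-- vector plus the displacement vector observed afterwards (which is what
-- actually gets aggregated for aiming).
local WEIGHTS = {1.0, 3.0, 2.0}  -- per-dimension weights, same length as the situation vectors

local function weightedDistSq(a, b)
  local sum = 0
  for i = 1, #a do
    local diff = (a[i] - b[i]) * WEIGHTS[i]
    sum = sum + diff * diff
  end
  return sum
end

-- Returns the k logged entries closest to the current situation.
local function nearestNeighbors(log, situation, k)
  local scored = {}
  for i = 1, #log do
    scored[i] = { entry = log[i], dist = weightedDistSq(log[i].situation, situation) }
  end
  table.sort(scored, function(x, y) return x.dist < y.dist end)
  local neighbors = {}
  for i = 1, math.min(k, #scored) do
    neighbors[i] = scored[i].entry
  end
  return neighbors
end

A real gun would keep a bounded heap of the k best instead of sorting the whole log, but at a few thousand ticks per enemy even a naive version like this is probably plenty fast.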
Ahh, neat.
With regards to handling the presence of walls, that's a case where I see the utility of PIF versus displacement vectors and guessfactors as being significant. Guessfactors aren't really well-suited to the presence of walls. Displacement vectors allow you to handle walls occluding where they end up. PIF however allows you to handle not only walls occluding where they end up, but also handle ruling out paths which would be interrupted by walls.
Yeah, agreed that PIF has some key advantages. Checking wall collisions every PIF tick might be slow, I'm not sure. There's no trig, but it would be (# walls) * (# projections) * (# ticks per projection) line intersection checks - not sure how much that adds to the PIF CPU load.
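For reference, each of those checks is just a handful of multiplies and compares - something like this (a sketch; assumes each wall is stored as a line segment, and ignores collinear/touching edge cases):

-- One "line intersection check": does segment (x1,y1)-(x2,y2) cross segment (x3,y3)-(x4,y4)?
-- No trig, just cross products. Collinear / exactly-touching cases are ignored here.
local function cross(ox, oy, ax, ay, bx, by)
  return (ax - ox) * (by - oy) - (ay - oy) * (bx - ox)
end

local function segmentsIntersect(x1, y1, x2, y2, x3, y3, x4, y4)
  local d1 = cross(x3, y3, x4, y4, x1, y1)
  local d2 = cross(x3, y3, x4, y4, x2, y2)
  local d3 = cross(x1, y1, x2, y2, x3, y3)
  local d4 = cross(x1, y1, x2, y2, x4, y4)
  return ((d1 > 0) ~= (d2 > 0)) and ((d3 > 0) ~= (d4 > 0))
end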
I'm excited to try this in Melee, too, since you can see everyone at once. Though I may need that kd-tree pretty quickly.
You don't need to do that many line intersection checks. If you store your walls in a tree (i.e. r-tree or bucket kd-tree), or even a sorted list (sorting by one of the two axis), you can avoid needing to check every single wall by quickly ruling out large groups of walls that are too far away.
If your robot is using a pathfinding algorithm to navigate the walls, you could re-use that structure to find which walls you need to check (if any) to avoid needing new structures to search walls (i.e. track the current pathfinding node, and use that node's information to know which boundaries to check on a tick if any).
To save per-tick iterations, you can also use a trick I've used and seen used in PIF implementations: Based on the minimum number of ticks away waves/walls are, you can skip ahead in your PIF history, ignoring all the ticks inbetween.
In other words... nah... you can more or less reduce both the (# walls) term and the (# ticks per projection) term to much smaller numbers.
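The skip-ahead step might look roughly like this (just a sketch - everything is a parameter, so it makes no assumptions about the exact game physics values):

-- Lower bound on the number of ticks before the wave can possibly reach the
-- ship: the gap between the wave front and the ship closes by at most
-- (bulletSpeed + maxShipSpeed) per tick. Advance the PIF log index by this
-- many entries instead of stepping one tick at a time.
local function minTicksToIntercept(distToFireSource, waveRadius, bulletSpeed, maxShipSpeed)
  local gap = distToFireSource - waveRadius
  if gap <= 0 then return 1 end
  return math.max(1, math.floor(gap / (bulletSpeed + maxShipSpeed)))
end
-- e.g. in the replay loop: logIndex = logIndex + minTicksToIntercept(...)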
Thanks for the ideas! I love the kd-tree/sorted list one. I might have to go back and optimize it in the main game engine.
I'd say for that purpose r-trees are more suited than kd-trees, but yeah. From what I've heard, it's very much standard practice to use r-trees or other structures for collision detection in game engines where you have a very large number of entities/surfaces/etc which you may want to perform collision detection with.
With regards to 'other structures', so long as things are in a bounded size region and the density variation of objects is not too extreme, rather than a tree you'd probably get better results taking a very simple approach: Divide the area into a 2d grid, scaled so that you have a no more than a few entities in each cell, and in each cell store a list of entities which are within that cell. The only reasons to use a tree instead of a coarse array, is when the density of collidable objects/surfaces varies significantly between areas (i.e. you don't want lots of cells that are near-empty with other cells that have thousands of objects)
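A bare-bones version of that grid might look like this (a sketch; assumes axis-aligned rectangular walls and a hand-tuned cell size, with all names illustrative):

-- Coarse uniform grid for wall lookups. Walls are axis-aligned rectangles
-- {left=, bottom=, width=, height=}; cellSize is chosen so each cell holds
-- at most a few walls.
local function buildGrid(walls, cellSize)
  local grid = {}
  for _, w in ipairs(walls) do
    local x1 = math.floor(w.left / cellSize)
    local x2 = math.floor((w.left + w.width) / cellSize)
    local y1 = math.floor(w.bottom / cellSize)
    local y2 = math.floor((w.bottom + w.height) / cellSize)
    for cx = x1, x2 do
      for cy = y1, y2 do
        local key = cx .. "," .. cy
        grid[key] = grid[key] or {}
        table.insert(grid[key], w)
      end
    end
  end
  return grid
end

-- Only the walls registered in the cell containing (x, y) need precise checks.
local function wallsNear(grid, cellSize, x, y)
  local key = math.floor(x / cellSize) .. "," .. math.floor(y / cellSize)
  return grid[key] or {}
end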
So thinking this through now, the walls in BerryBots have some interesting implications for PIF, too. For instance, if a log you're replaying includes that bot hitting the wall, what do you do? Or if a replay would cause you to hit the wall? In Robocode you might throw those out - it's rare and kind of a flukey situation, since you mostly never want to hit the wall. But there's no real disincentive to hitting walls in BerryBots, so it could be super frequent. Maybe you instead replay until the wall hit, then do a sort of single tick style re-projection for similar situations from that point on. And you probably want to precisely predict the wall bounce physics too.
Maybe what would make sense would be "fuzzy" wall collision handling that validates the projection against what happened in the PIF log:
- Detect and log when the enemy hits a wall as part of the PIF log.
- If the log that is being replayed does not contain a wall collision at a similar time, throw out projection if it collides with the wall by more than a certain margin (give leeway for the projection barely glancing by it)
- If the log that is being replayed does contain a wall collision, throw out the projection unless the projection also contains a wall collision at a similar time (or at least a near-collision)
The idea would be to filter projections based on an approximate presence/absence of a collision matching the PIF log, in order to ensure the scenario of that projection is sufficiently similar to what you'll be predicting.
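As a sketch, the filter could boil down to a predicate like this (assuming both the stored log entry and the live projection record the tick offsets of any wall hits; the "glancing by within a margin" leeway is left out here):

-- Keep a projection only if its wall hits roughly line up with the wall hits
-- recorded in the PIF log entry being replayed. logHits / projHits are arrays
-- of tick offsets; tolerance is in ticks.
local function projectionMatchesLog(logHits, projHits, tolerance)
  if #logHits ~= #projHits then return false end
  for i = 1, #logHits do
    if math.abs(logHits[i] - projHits[i]) > tolerance then
      return false
    end
  end
  return true
end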
Very cool ideas. I guess you might need to allow for more fuzziness depending on the wall layout / wall hit frequency. I still like the idea of re-projecting from time of wall hit, but I'd really have to just try both and compare because I just don't have much sense of which would work better.
It also makes me wonder about doing the same in Robocode PIF. You could compare snapshots along the projections to further narrow down which situations match most closely or throw out irrelevant projections. You wouldn't have as many wall hits to compare, but you could still compare wall hits and distances or other stuff.
I guess the problem is you mainly want to examine stuff that causes you to alter your path, which requires knowing the future unless you're talking about walls that don't move. But if a past move log brings the enemy far away from all other bots on the field, and projecting that log now causes them to run into a swarm of bots, that might not be tough to predict and easy to realize you should throw it away.
Back at this and the replay support's almost ready. Now with ship/stage output consoles, video player controls, and results at the end! [1] [2] [3] [4] [5] Still a few loose ends and optimizing to be done. (And I think Linux/FireFox, the combo specifically, is still screwed up.)
Some things I'm really excited about:
- The data collection is pretty fast and low overhead, so I can leave it on all the time in the engine. Last I checked it was a ~5% performance hit, which seems worth it.
- You'll be able to save replays from the Game Runner API. So for anomalous results, you can save the replay (which includes your output console). Like right now I have a 99.7% success rate on Laser Gallery and need to figure out what's up in those 3 of 1000 cases.
- Long term: I'd like to have a web app that lets you select a stage, some sample bots, input Lua code for your own bot, submit form to run match headless in the cloud, and view replay in the browser.
And man, now that I'm testing this with the Game Runner API, I am just salivating. You can run a tourney or a benchmark and literally save every replay to watch later without any problem, besides maybe disk space. (Looks like ~100 bytes per tick in these 1v1 matches I just ran. So 10,000 ticks would be 1 meg.) How cool would it have been to have replays for every match in the Twin Duel every week when it was running?
Sorry to boast, just excited... :-)
I'm not sure I'll make the time, but I don't see any reason we couldn't add this same style of replay support to Robocode. All you really need is fast/native buffers to cache all the pertinent data during the match in a lazy/efficient way. Java has ByteBuffers. I re-tested and with BerryBots, it really is only a 5% performance penalty with trivial bots. And of course some memory, but not even that much. And once you have sophisticated bots, the engine itself is only a small % of overall CPU usage anyway.
There's still a fair amount of work in coalescing/parsing the data, saving it, tying it into the GUI and control API, writing the replay renderer and all that, but no major technical limitations I can see. The biggest obstacle for me would just be getting my build environment setup to build Robocode, which I've never done before. The Javascript rendering could probably be forked from BerryBots, too, instead of starting from scratch.
Finally! BerryBots v1.3.0 is out with HTML5 replays and a killer new stage preview.
- Details: BerryBots v1.3.0
- Sample: vortex-2013.10.05-17.09.45.html
- Video: BerryBots replays
- Video: BerryBots overview v1.3.0
So now, out of the box, you can use the "simpletourney" sample Game Runner to run a single-elimination tournament, multi-threaded, saving replays of every match that you can post with the results. I think that's pretty sweet.
You can also save replays from your benchmarks. So for those 1 in 1,000 cases where you mysteriously lose to Walls or whatever, you can save the replay and watch the match afterwards. And replays contain your output console. I think it beats the heck out of trying to log all the right stuff to disk.
Those replays are really cool. I wonder if there's any way to get your stuff onto hackaday or the RasPi blog?
Thanks man!
Yeah, PR isn't exactly my forte. And BerryBots isn't the only programming game out there lacking players, so it's kinda tough not to come off as desperate. I'm a little more comfortable with it now that BerryBots is more polished with some neato features that I'm really proud of... But still, I'm more inclined to just focus on making it good and make a small PR push every once in a while.
I also try to keep in mind that I didn't even get into Robocode until 4-5 years after it came out, and that was years after the original author had stopped working on it. Programming games can have a pretty long and slow life cycle. Even here at the RoboWiki, we have some pretty long periods of low activity. And it even seems like Robocode is still growing in popularity! Which is kinda crazy.
Now available in a browser near you. :-) http://playberrybots.com
Wow. I am impressed.
One suggestion: when the mouse is over the battlefield window, you see a humongous pause symbol right in the center blocking the view. Maybe putting it somewhere in a corner would be a better idea.
Thanks, that's good feedback. If you didn't notice, the controls disappear if the mouse is still for a certain time - I'm basically trying to emulate typical video player behavior. So I think I should probably do two things:
- Shorten the delay before the controls disappear on desktop (responding to mouse movement).
- Make the controls graphics smaller on desktop.
As a side note, the timer is actually much longer on tablets because I thought that felt more natural there. Probably because you typically make a lot more ambient mouse movements than touch events to extend the timer.
Well, as Skotty mentioned below, it is FAST. Sometimes the player controls have no time to disappear and the game is already done. Maybe it is enough to respond to a click event in the battlefield area for "pause"? We are all sort of used to this behavior in video players.
And I second Skotty, it definitely needs a slow-down button. I think one exists in the desktop version.
Alright, I think these have been addressed, though I've kept the video player style overlay.
- Defaults to 30 fps (half speed) with options for faster.
- On desktop, pause/play controls are smaller and disappear more quickly (by ~30%).
- All the controls are shifted down a bit.
Thanks to both you guys for the feedback! I'm pretty happy with these changes.
Awesome. Very well done. Not that I'm an expert on the variety of these types of games, but I've never seen one where you could do all the coding and battles online before.
Is there a way to adjust the game play speed? I don't recall seeing that. It would be nice if play could be slowed down. The default speed seems so ADD.
Thanks. It's certainly not the first to run in a browser. Most of the ones I've seen use Javascript for the bots and actually execute the game engine in the browser too. This is offloading the game processing to an Amazon EC2 instance and then viewing the replay. Also, some web based games are like a whole platform where you upload your bots, compete in ladders... This is more of a stand-alone try it / demo thing, at least for now.
In the desktop app, you can adjust the playback speed, but I didn't get that into the replays yet. And I've been considering switching the default from 60 to 30 fps... I guess that's one vote in favor. :-)
BerryBots v1.3.2 brings Game Runner API and replay/web rendering updates, plus a bunch of bug fixes. [1]
But what I'm really excited about is it adds functions to the Game Runner API that let runners write to the ships and stages directories. Since Lua is interpreted, there's no compile step necessary and this is all that's needed to make genetic algorithms (and similar) possible. I included a simple example program that evolves a ship to solve a maze: MazerEvolver.
By the way, I ripped off the "Chronicle" idea for BerryBots: BerryBots Chronicle of 2013. (Not that the RoboWiki invented the end-of-year recap...) It was a big year!
Kicking off the first real BerryBots competition: BerryBots Little League. :-) A month to write a duelist and win a t-shirt or $25 Amazon gift card. I didn't want to force things, which is why it's taken this long to do this... But finally a couple people brought it up in the forums, so here we are!
Most exciting part for me is you can develop and submit your bot right from the web UI at PlayBerryBots.com/battle.
How robust are your servers? Do you think you could handle a RaspberryPi.org blog entry?
Haha, I really doubt it. :-) But I dunno. It would be a good problem to have. :-P It's on Amazon EC2 and on a single instance of their smallest tier. But theoretically, it shouldn't be too hard to scale it up with EC2, since that's basically their core strength as a platform. It's a pretty simple app.
Er, I think berrybots.com could handle it. That site's pretty lightweight and on its own VPS. But playberrybots.com is on EC2 and obviously running matches server-side could slow down pretty quickly. Most matches only take 1-2s to run, so it should be able to handle at least a few people at a time.
Got my t-shirt yesterday! http://imgur.com/a/TlZMS :-)
on http://www.dijitari.com/void/robocode/ there are unreleased versions of diamond and dookious and other robots why didn't you release them voidious WHY
here are two versions of dookious released after 1.573c:
- http://www.dijitari.com/void/robocode/voidious.Dookious_1.59.jar
- http://www.dijitari.com/void/robocode/voidious.Dookious_1.60.jar
also here's:
- http://www.dijitari.com/void/robocode/voidious.Dookious_1.58.jar
- http://www.dijitari.com/void/robocode/voidious.Dookious_1.581.jar
- http://www.dijitari.com/void/robocode/voidious.Dookious_1.582.jar
- http://www.dijitari.com/void/robocode/voidious.Dookious_1.583.jar
- http://www.dijitari.com/void/robocode/voidious.Dookious_1.583b.jar
- http://www.dijitari.com/void/robocode/voidious.Dookious_1.584.jar
- http://www.dijitari.com/void/robocode/voidious.Dookious_1.585.jar
- http://www.dijitari.com/void/robocode/voidious.Dookious_1.573b.jar
- http://www.dijitari.com/void/robocode/voidious.Dookious_1.573.jar
Yes, all of those versions were in fact released. The reason the versions of Dookious newer than 1.573c are not currently in the rumble is that they were experiments that performed worse than Dookious 1.573c.
Please stop getting worked up about things without even looking into their history.
Hey dudes - I'm gearing up to release BerryBots v1.1.0 with a full GUI for Mac/Linux/Windows. If any of you would be willing to grab this release candidate and test it on your systems, I would be super duper grateful! I mainly just want to know it launches and runs battles ok, but obviously any/all feedback is also welcome.
I just downloaded the windows version. It is launching and running battles alright, so far.
I'll also add that it has a very nice, simplistic interface.
I get the following error with the Linux 64 bit version:
[andrew@host-110-86 berrybots]$ ./bbgui
./bbgui: error while loading shared libraries: libpng12.so.0: cannot open shared object file: No such file or directory
[andrew@host-110-86 berrybots]$ sh berrybots.sh
./bbgui: error while loading shared libraries: libpng12.so.0: cannot open shared object file: No such file or directory
Great, thanks for testing this! What Linux distro/version is this?
I think the cleanest solution is probably just to ask people to install libpng (and/or check that it's installed from an install or run script first). Maybe testing a few other distros to figure out what needs installing would be a good idea before release. I know it doesn't require anything on the last two Ubuntu's.
Sweet, was able to duplicate this on Mageia 2. Will try to either get to the bottom of it or maybe just building it on non-Ubuntu will solve the problem. Right now looking to test on Ubuntu / Mageia / Fedora - still curious to hear what you're using.
Oops, I forgot I didn't say. I'm using fedora 18. I installed libpng, but then hit more libraries that I was missing. I'll try it on Windows today.
After some research, I think the problem is that you have a newer version of libpng (like libpng14 or libpng15), not that you don't have libpng at all. My Mageia 2 install has libpng15. I know what libpng is but this whole problem is pretty new to me, so I have to figure out what's the right way to address it.
Thanks man!
Versions after libpng12 broke compatibility with libpng12 in notable ways. For this reason, libpng12 still has maintenance releases, and many distributions (i.e. Arch Linux, Ubuntu, Debian, and after checking, Fedora 18 too) have packages for both libpng12 and the newest version of libpng, which can be installed simultaneously without conflict.
AW: You should be able to install a "libpng12" package from your package manager I believe.
I have no attachment to any libpng versions, so I guess the best move is to compile against whichever version works most commonly across default installs of Ubuntu / Mageia / Fedora, probably libpng14. Is that what you'd recommend?
No clue about Mageia, but Fedora and Arch provide both libpng15 and libpng12 (libpng12 isn't installed by default, but should be easy to install). Ubuntu and Debian however only provide libpng12. To me libpng12 looks like the safest for binary releases of software for now. Well, safer still is statically linking libpng, but yeah.
Thanks Rednaxela, this has been really helpful. If installing libpng12 is simple on most distros that don't ship with it, that sounds like the way to go.
I could still have a problem, though. wxWidgets will dynamically link to GTK, which dynamically links to system libpng. So I could then be linking to libpng12 and libpng15, which seems bad, but maybe it's not. I'll do some tests. Maybe I just have to build a separate binary release for Ubuntu/Debian and Fedora/Arch/Mageia, which isn't really a big deal.
Yeah, I installed libpng 1.2 and now I need libGLEW 1.8 (Fedora's package manager only has 1.7). It runs fine on Windows though!
Cool, thanks AW! Funny, Windows was the platform I was most worried about. :-) Btw are you on Windows 7?
I guess offering binaries compiled on several common Linux distros is probably a fairly safe way to go. Packaging as an RPM may help in defining/managing these dependencies, too, though.
It works here as well. But it does seem to have an unused console window. Windows tends to be fairly straightforward once you know what it needs to have, namely DLLs.
On a side note, I have noticed a random bot hitting itself with its own laser shots.
Compiled on Fedora 18 64-bit, if you want it: [1] (It does give a harmless warning about receiving unicode text input that I am not sure I can fix.)
But hopefully I can figure out a way not to have to offer per-distro Linux binaries...
Bah, I'm not finding much to indicate I can make this any easier than separate binaries for a few major Linux distros and providing source / good build instructions (which I have) for anyone else to compile it themselves.
Still very open to advice from any resident Linux gurus tho. :-)
Mainly what I have seen lately is that many programs have moved to being managed by a package manager. I have opinions on that. But other programs, such as Audacity, offer a few different packages for a few major distros plus the source code.
If you statically link libpng and libglew, I'd expect that to probably work better across distros, Voidious.
Aren't the odds high that those each link to the system specific version of something else which has the same problem? I'm also concerned about ending up linking in two versions of libpng and libglew, since something like GTK would dynamically link them in.
Those two would link to other things yes, but those (libc, zlib and OpenGL itself) have a very stable ABI so far as I know, and likely would not have the same problem. If really worried though, one could probably statically link everything except OpenGL and wxWidgets. GTK dynamically linking a different one in shouldn't be a problem I think, it should coexist just fine.
I tried out the fedora binary and it works now. Thanks!
I noticed this game has a lot of fast-moving bright colors--many more than Robocode.
It would be wise to put in some kind of legal disclaimer, such as:
"Do not use this product if you have been diagnosed with epilepsy or any other photosensitive medical condition."
Hi mate,
Right now I don't have much time to check it out, but on my MacBook the app crashes instantly.
I can send you the full crash report if you want, but maybe the first lines help you see what's wrong:
Date/Time:       2013-03-06 06:50:35.838 +0100
OS Version:      Mac OS X 10.6.8 (10K549)
Report Version:  6

Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000000
Crashed Thread:  0  Dispatch queue: com.apple.main-thread

Thread 0 Crashed:  Dispatch queue: com.apple.main-thread
0   ???                          000000000000000000 0 + 0
1   libwx_baseu-2.9.4.0.0.dylib  0x00000001926c7e8b wxEntry(int&, char**) + 11
2   voidious.BerryBots           0x00000001919610b6 0x19195d000 + 16566
3   voidious.BerryBots           0x000000019195ece4 0x19195d000 + 7396

Thread 1:  Dispatch queue: com.apple.libdispatch-manager
0   libSystem.B.dylib            0x00007fff83bcec0a kevent + 10
1   libSystem.B.dylib            0x00007fff83bd0add _dispatch_mgr_invoke + 154
2   libSystem.B.dylib            0x00007fff83bd07b4 _dispatch_queue_invoke + 185
3   libSystem.B.dylib            0x00007fff83bd02de _dispatch_worker_thread2 + 252
4   libSystem.B.dylib            0x00007fff83bcfc08 _pthread_wqthread + 353
5   libSystem.B.dylib            0x00007fff83bcfaa5 start_wqthread + 13

Thread 2:
0   libSystem.B.dylib            0x00007fff83bcfa2a __workq_kernreturn + 10
1   libSystem.B.dylib            0x00007fff83bcfe3c _pthread_wqthread + 917
2   libSystem.B.dylib            0x00007fff83bcfaa5 start_wqthread + 13

Thread 0 crashed with X86 Thread State (64-bit):
  rax: 0x0000000000109f80  rbx: 0x0000000000000000  rcx: 0x0000000000000001  rdx: 0x00000000000fc080
  rdi: 0x000000000013b7e0  rsi: 0x0000000000100000  rbp: 0x00007fff5fbffa90  rsp: 0x00007fff5fbff978
   r8: 0x0000000000000000   r9: 0x000000000013b300  r10: 0x00007fff8ab4b630  r11: 0x0000000000000000
  r12: 0x00007fff5fbffa08  r13: 0x0000000000109f60  r14: 0x0000000000000000  r15: 0x00007fff5fbff9f0
  rip: 0x0000000000000000  rfl: 0x0000000000010206  cr2: 0x0000000000000000
I plan to give it a deeper look on weekend.
take care wompi
It just won't work on anything before 10.7. I tried to get it compatible with 10.5 or 10.6 but couldn't get the main gfx library (SFML 2.0) to compile with that. Still like 30% of folks on 10.6, so that kind of sucks... But I'm a late updater and even I'm on 10.8 now so hopefully most people will be on at least 10.7 pretty soon.
Thx for giving it a shot tho!
Actually, I am going to take another pass at 10.6 compatibility. I'll let you know if I have any success and maybe I can get you to try it again...
Argh.. I didn't read the requirements, sorry, that was my bad. When it comes to updates, I usually only switch to the next version if there is a program I want that needs it. And because I desperately want to have a look at BerryBots, it is a perfect time to get "Lion", I guess.
Please don't bother with 10.6 compatibility, it's not worth the time.
Take Care
Wow, I'm honored. :-)
I'm all in favor of being as backwards compatible as possible, so I gave 10.6 compatibility another shot, but I couldn't get it working. I got wxWidgets and SFML to compile with the 10.6 SDK and 10.6 target version, but my XCode project itself still fails with some linker errors I can't figure out when I set the target to 10.6. Seems like I'm close, but I'm at a loss...
Hi mate.
Finally I got my system upgraded. It was quite a tough task, because "Lion" is not on sale anymore and that's the only upgrade my old MacBook can handle. Anyway, one data loss and near heart attack later, I figured out how to bring the system back to working order - I guess you owe me a beer :).
I'm not sure if I need more updates (there are still some shown in "Software Update"), because BerryBots is still crashing. It starts, and I can choose a stage and bots, but if I hit start game, the app quits.
Maybe you have an idea of what could be wrong. And if you need more info, I can give you the whole crash report.
take care
Identifier:      voidious.BerryBots
Version:         1.1.0 (rc1)
Code Type:       X86-64 (Native)
Parent Process:  launchd [177]
Date/Time:       2013-04-01 20:25:54.514 +0200
OS Version:      Mac OS X 10.7 (11A511)
Report Version:  9
Crashed Thread:  0  Dispatch queue: com.apple.main-thread
Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000004
EDIT: Sure I will try it out right away.
Argh! For the sake of testing clarity, could you try this one? [1] I know that's the exact v1.1.0 that my brother tried on his 10.7 iMac. You could also try v1.1.3 from the downloads page.
If that doesn't work, please do post the whole crash report, or email it to me (voidious at gmail). Thanks man! Hopefully you didn't upgrade to Lion in vain!
So far it's working on my Windows XP box. But the first match I tried seems potentially buggy. I put Drifter and MyFirstShip into lasergallery. Is lasergallery just meant for 1 ship? The two ships seem to start on top of each other. Is robot positioning random? Or controlled by the level perhaps? Sometimes Drifter drifts away and it runs okay, but most of the time Drifter holds still while MyFirstShip totally spazzes out, always appearing to touch Drifter but bouncing from side to side like mad. [1]
A couple other things. I realize this is still an early release, but is there a way to set descriptions for stages? One of the first things I noticed was that I had no idea what the stages were meant for, outside of guessing based on the file name. It would be nice if there was a way for stage designers to set descriptions for their stages that could be read by users when picking a stage to run.
Also, when the New Match window is open, the BerryBots window where the battle plays out doesn't repaint. A minor thing, and it's interesting the patterns I can make by running other windows over top of it, but I doubt that's an intended feature. :-)
I think I've fixed the repainting issue, but I don't have any machine where I can reproduce it. If you could try this on XP sometime (no rush) I'd love to know if it's fixed: [1] (I'm actually kind of surprised/excited BerryBots even works on XP...)
Also took a pass at the stage description / wrong number of ships stuff.
I'd say the repainting issue is fixed. It still happens while dragging the New Match window over the battle window, but then it repaints as soon as you stop dragging the New Match window. And it doesn't happen at all anymore when dragging other application windows. I wouldn't bother with it anymore, especially if it just happens on XP.
Another bug, or perhaps just me not understanding the level: on the drift stage I put in WallHugger and Drifter. WallHugger wanders around the walls while Drifter tries to go up the middle. Drifter always wins, but WallHugger always makes it to and around the zone at the top before Drifter gets there. If the goal is to get to the zone first, WallHugger should be winning. Or maybe I misunderstand the goal?
Would you rather me be spamming all this feedback in your BerryBots forum? Or do you want to keep it here among Robocoders for the moment. I have been assuming the latter.
Either is fine. Here probably makes more sense for now. Huge thanks for all the testing/feedback!
- Not knowing which bots/stages work together is definitely the biggest source of confusion I see at the moment. Indeed lasergallery and drift are only designed for one ship. I'm not sure the best way to address this, but I think I'll raise the priority on this and get it fixed before release.
- I could just read any comments at the top of the stage file as the description (they do all have such comments) and put it in the UI somewhere, like stage preview.
- In addition, each of the 1p stages could just destroy any player ships beyond 1 and print a message to the screen. (The mazes do print a message actually, but not the others.)
- It's no substitute for fixing this in-game, but if you do want to read about the stages, they all have write-ups (though some with out-of-date source code) at the wiki: [1]
- Specific start positions can be defined by the stage, and beyond that they're random.
- Did you mean literally "on top of each other", like they were overlapping and stuck? If so, that's a bug I'd love to reproduce. That should never be possible - it's enforced at a fairly low level in the game engine.
- The New Match dialog is styled to be a modal thing. The main window doesn't accept input or update until/unless you close New Match. I'd never seen any visual artifacts like that though (just tested here Mac and Win 8). I'll see if I can reproduce somewhere and try to figure out a fix. This happens even with New Match retaining focus the whole time, just covering/uncovering part of the main window?
Ok, I see what's happening now on lasergallery. The first ship nabs the built-in starting position, but the second ship is what the stage saves as the "player ship", and the stage tries to set its position to that spot each tick.
I have a bunch of thoughts on the ship/stage compatibility issue. I'm trying not to over-engineer it, but it is lacking right now for sure.
- The stage description would go a long way, I think that's a must-have.
- In general, I think for ships/stages you download yourself, you have a good idea what you're doing, and engineering solutions to ensure compatibility are probably overkill. In Robocode, I don't see a lot of confusion trying to run 1v1 bots in Melee battles, or Movement Challenge bots in regular battles, etc. So I partly see this as an issue specific to sample ships/stages, or at least magnified there.
- An API to let the stage set min/max ships is an idea I'm considering. This would let me fail fast and not even start a misconfigured match. My issue with this is it's still not sufficient (Snail would be pretty dumb on lasergallery).
- On my to-do list is to let a stage set a tag for the rule set it's using. Just a string, like "battle" or "maze". Then ships could check this value in case they wanted to behave differently based on the rule set. E.g., a team might support "battle" and "ctf". I could expand on this by letting ships define what rule sets they support, though I'm on the fence about that.
I think for now I'll just do:
- Show stage descriptions in New Match dialog somewhere.
- Update sample stages to gracefully handle too many ships.
I was playing kind of dumb in my earlier testing, but doing so helps ensure everything is as user friendly as it can be. I like the idea of a stage being able to indicate min/max bots, though I see your point about it still being possible to pair robots with stages they were not meant for.
I'm not sure if this is even possible, and I certainly wouldn't worry about it anytime soon, but the one thing that I think would be *really* cool in a game like BerryBots, Robocode, or similar would be if they provided a way for a user to paint his/her own ship/tank/robot; something more than just setting colors. This would greatly enhance interactive play. I'm not sure how many others would be into this idea though. Just an idea for the backburner.
Kind of interesting, I put 2 Jousters into the stage joust and within just a few runs experienced two different tie scenarios. The first one, both Jousters ended up head to head trying to push each other to the opposite side, neither budging at all. The second one, apparently they both bounced out to the zone at the same time, and it just said Game Over instead of declaring a winner. Luckily, I didn't have to put a quarter in my computer to play again (that's what it made me think of).
I tried to run BerryBots but instead I got this error message:
"The program can't start because sfml-graphics-2.dll is missing from your computer. Try reinstalling the program to fix this problem."
It worked fine before.
Hmm - do you see sfml-graphics-2.dll in the same directory as BerryBots.exe?
Aha!
When I open it straight from Explorer, it works fine. But, when I go through the shortcut I put on my desktop, I get the error. Are you making some kind of reference to that DLL that wouldn't work through a desktop shortcut?
You are correct, I am using Windows 7. (I've tried linux, but I haven't gotten the hang of it yet.)
Ah! Ok, that makes some sense. Really glad you uncovered this one!
It needs the DLLs somewhere that the system will find them. Having them in the same dir as the .exe works, and lets me avoid needing an installer or polluting your system with DLLs (or so I thought). But maybe I do need an installer after all, or to static link everything (all the required DLL code goes right into the .exe, basically).
The desktop shortcut works for me on Windows 8. It's definitely a shortcut on your desktop right? Not a copy of the .exe? I think if you want a work-around for now, you could copy the DLLs into Windows\System32, or add the BerryBots directory to your PATH.
You're right. I actually copied the .exe file to my desktop folder. It's fixed now.
It probably would be a good idea to use an installer or have all code in one file, in case somebody else does something stupid. ;)
Oh cool, good to hear. I am looking into installers now, Inno Setup looks promising.
But I may put the installer off to next release if it's not totally breaking anything. The bigger piece of work here is that with the code in "Program Files", I'd also want to move bots/stages to somewhere else (configurable, eg My Documents\BerryBots), and then need a way to find them (like via the registry). I already have to do it this way on Mac OS X, and it does seem like a better setup. Right now it just knows to look in the subdirectories on Windows/Linux.
Just wanted to give a big thanks to all you dudes for helping me out with this. It would've been an ugly, bug-ridden first release of the BerryBots GUI without you. I owe y'all some testing whenever you need it. :-) Details on v1.1.0 if anyone wants to check it out: [1]
Tried it on Debian wheezy (current stable) 32 bits.
Upon startup I see the following error message:
error while loading shared libraries: libGLEW.so.1.6: cannot open shared object file: No such file or directory
A quick search of the Debian package repositories shows that it only has libglew1.7.
This thread is pretty out of date - which version was it you tried? I've got some Ubuntu binaries for v1.3.0 on the downloads page which might work better for you. [1]
If not, there are compilation instructions on the wiki, and I'd be happy to figure out building a Debian binary and adding it for the latest version. It's not hard to compile, but then I've also done it a zillion times. :-)
I'm not exactly an expert on packaging Linux apps, so feedback is more than welcome. Probably the model I trust most is that Chrome releases:
- 32 bit .deb (For Debian/Ubuntu)
- 64 bit .deb (For Debian/Ubuntu)
- 32 bit .rpm (For Fedora/openSUSE)
- 64 bit .rpm (For Fedora/openSUSE)
But so far it's just a simple binary in a zip.
I tried the old version, BerryBots for Ubuntu/Mint 32-bit v1.1.0-rc1.
If I download berrybots_ubuntu-32bit_1.3.0.tar.gz, then on Debian I'm now missing libGLEW.so.1.8.
Debian stable has libGLEW.so.1.7.
A quick attempt to compile it myself from source failed, since I don't have a lot of the prerequisites installed.
Why would one need cmake, if compilation is designed for make?
Building SFML requires cmake. Those instructions build everything from source.
I'll install Debian "wheezy" 32-bit here and put together a binary for you, and then move "better packaging on Linux" way up on my to-do list. It's hard enough getting someone to install anything at all when so much stuff happens through a browser these days, so getting this right is pretty important...
Alright, getting Debian 32-bit installed now, but I need to get to bed. So probably won't have it posted until sometime tomorrow night, if you're still interested. Then I'll see about packaging as .deb and .rpm as a hopefully better solution.
This one worked without a hitch.
Seeing the replay in a browser is super cool, but a bit unexpected :)
I used to make a few Debian packages for my own needs. It is not too complicated, especially if you are not too worried about being 100% compliant with the distribution policy. But it should be fine for non-official packages.
If you can do Debian, that means Ubuntu is done as well.
I feel silly asking, but how do I see updates and replies to these messages? I used the recent changes page, but that seems like overkill and not very convenient.
It seems your client has a bit of an obsession with certain pairings (look at that battles count!) and I'm not sure why - could you purge your files and restart? Thanks.
Say, any of you Robocoders have a fast quad-core machine (like Core i5/i7 or comparable) and feel like advising me? I'm considering buying a Core i7 (2600k) quad-core box that would mainly (for now) be for Robocode. But I'm wondering how much of a speed increase this will offer me.
- How long does a minimized 35-round battle of Diamond vs itself take? (Maybe run one then "Restart", if that helps JIT things up...) I'd need to know the Diamond version, Robocode version, and what CPU you've got to make full sense of that info.
- How much of a speed hit do you take per battle when running 4-threaded RoboResearch? Ie, if a given battle takes 60 seconds when you run single threaded, does it still take 60 seconds when you run 4 Robocodes, or how much of a hit does it take?
This would be a huge, geeky indulgence, so I'd love to get some idea what I'd be getting for my money if I actually pull the trigger. =) Thanks!
I have an AMD Phenom II X4 @ 3.6 GHz. My AMD is considered slower than, say, a higher-end i7.
You can see here for an ALU comparison: http://www.tomshardware.com/charts/desktop-cpu-charts-2010/ALU-Performance-SiSoftware-Sandra-2010-Pro-ALU,2408.html
Mine is closest to the AMD Phenom II X4 975 Black Edition on this chart (an overclocked 965 at 3.6). Since Robocode is math-heavy, you can see the result each chip gets.
For a real performance reference, see the number of rumble battles I can run in a given period (4 clients).
On this chart, the 2600K gets over twice the score of my CPU. 114.30 vs 55.0.
Well, pretty sure I've done 100k battles in a month, so this tells me it's 2.3x as fast. I probably wouldn't shell out $700 for double the Robocode power, but I'm guessing it's much more of a multiplier than that. Also, I reckon performance could scale differently with simple bots (many of your rumble battles) vs high-end bots, which are surely much more memory-intensive, and thus perhaps not as much sped up by an increase in raw CPU power.
So I'd still really love to know the time a certain battle takes and how close to linearly your Robocode power scales with # of cores...
I have done about 164,359 battles so far, so about 41,090 per client, in about 10 days - so multiply that by 3 for a full month, for a total of about 123,270 per client. But that CPU is about twice mine in math, so estimate around 250,000 per client over a month, or 2.5 times yours with a single client (even if you have more than 1 CPU, a client only uses 1 CPU's worth of CPU time). Times 4 for 4 clients equals about 1 million. This totals to about 10 times yours if you only run a single client, or 5 times with two.
All things being equal. However, to answer your original question: I do not know the exact amount of time it takes, but it isn't very long. Also, as long as I stick to only 4 threads, the speed is equivalent to running only 1 thread, provided my computer is doing nothing else.
Because of Intel's hyperthreading, you may be able to get away with 5 or 6 threads without much overall hit.
Well, thx for the info. That assumes Robocode power scales linearly with benchmark scores, which is something I don't trust, or I wouldn't even be asking this. :-) And comparing RR client battle count is a very rough estimate, too. (Maybe I've done 150k? Don't remember, and who knows what bots or if that was full time...)
If anyone wants to serve up some cold hard battle times and single vs 4-thread comparisons, I'd still much appreciate it!
Just in the last week I got an AMD Phenom II X6 1090 at 3.2GHz here. Sure, it's slower per core than a high end i7 like the 2600K, but on the other hand 1) The CPU is practically half the price of a 2600K, and 2) six cores rather than four is nothing to sneeze at for Robocode purposes.
Running Diamond 1.6.7 versus itself, 35 rounds:
- 50.265s average (Trials: 49.735s, 43.292s, 51.542s, 53.741s, 50.764s, 52.277s, 52.839s, 47.930s)
- This is without GUI, and including robocode startup time (about 1-2 sec)
Running Diamond 1.6.7 versus itself, 35 rounds in 2 separate robocode instances:
- 29.305s per battle
- 58.611s per instance (Trials: 58.016s, 61.019s, 56.540s, 59.753s, 55.980s, 62.418s, 57.420s, 57.741s)
- Robocode startup time increased to 4 seconds. This would not be a factor in a battle runner which runs multiple battles in the same JVM!
Running Diamond 1.6.7 versus itself, 35 rounds in 4 separate robocode instances:
- 16.294s per battle
- 65.174s per instance (Trials: 65.207s, 66.350s, 67.326s, 66.458s, 62.473s, 62.683s, 65.189s, 65.709s)
- Note: robocode startup already seemed to be highly parallel, because startup now took up to 8 seconds for one instance! As such, about 6 seconds of the increased time can be attributed to robocode startup.
Running Diamond 1.6.7 versus itself, 35 rounds in 6 separate robocode instances:
- 15.736s per battle
- 94.417s per instance (Trials: 91.325s, 92.572s, 93.809s, 94.710s, 95.344s, 95.738s, 97.421s)
- The gains seem to flatten out about here. One note is, because one instance of robocode on its own uses something like 115% of a core, I should reach a CPU limit at 5-ish instances, not 4-ish, so I suspect I'm hitting a memory bandwidth bottleneck.
One instance of Robocode can use more than 115% of a core. It oscillates between 100% and 200%. It is expected to see a performance decrease in benchmarks when you run more instances than half of your cores (all instances using 200% at the same time).
Why would it use 200%? According to Pavel, different robots can run on different cores, but they are synchronized so only one is running at once, basically capping your actual performance at the speed of one core. So it should be 100% + some JVM / Robocode engine overhead, I'd think.
That overhead happens about 30% of the time, so an instance uses about 130% of a core on average. But there are peaks of 200%. When I run 3 instances on 4 cores, they use all cores most of the time, but you see one idle core sometimes (and it's not uploading).
When running test beds, I run one instance per core (and disable turn skipping), so all cores are used all of the time.
Running a benchmark restricting each instance to a single core would remove that parallel overhead.
Oh yeah, that does remind me, I do have some pretty wicked memory in here. Tuned just so to get maximum speed out of it (which, in most things I do, usually affects the overall feel of my computer's speed more than pure CPU power).
If I recall correctly, my DDR3 is running at 1600 with 7-8-7-20 timings.
Quick little note to compare, DDR3 running at 1600 here too, but with 9-9-9-24 timings. Anyway, at 4 threads I don't suspect I'm hitting memory bandwidth bottlenecks, whereas it looks like I may be at 6 threads.
The cores all share an L3 cache too... I wonder if it's worth the extra $100 to get 1600 RAM (and the mobo that supports it). I've been buying Macs for the last 5 years, I feel like such a noob examining this kinda thing again. =)
Extra $100 for 1600MHz RAM? My RAM only cost $50 for 8GB, and I didn't see notably cheaper prices for slower RAM, really. As far as the motherboard, mine was a little fancier than some others, but it was only about $115. So... It shouldn't cost $100 extra for 1600MHz RAM.
Yeah, I'm looking at barebones kits which default to a pretty cheapy motherboard, so most of that was to upgrade to a decent one. Prolly worth it anyway, and while it's not quite Apple-level gouging on memory itself, I guess it's universally true that I should buy/install my own. ;)
That's great info Rednaxela, thanks! Btw, how are you timing the battles so precisely, and measuring JVM startup time? I wasn't expecting 3 decimal places. =)
I'm getting 79s / battle single-threaded, 42s / battle dual-threaded on my MacBook Pro (Core 2 Duo 2.8 GHz), just trusting the times output by RoboResearch. So it looks like you're almost 3x as fast, which is pretty darn close to the PassMark scores (6053 vs 2029). So maybe I can hope for 5x as fast with the 2600k after all, which would be fabulous!
For timing I'm just running the *nix command "time ./robocode.sh -nodisplay -battle battles/diamond.battle". For Robocode startup time (including JVM startup but not just that), I'm just roughly estimating by watching the command line output.
Here's some fun... I tried using my motherboard's "automatic overclocking" functionality where it autonomously tries to see how high it can clock things, and it decided it could get it up from 3.2GHz to 4.2GHz (+30%). Both Windows and Linux booted fine, so initially I thought it was stable, and it ran one robocode battle at a time fine, but as soon as I tried to run multiple in parallel, the JVM kept crashing and it became apparent that the +30% overclock was not stable despite OSs booting fine. Interesting thing was, the +30% overclocking seemed fine thermally even with the stock cooler, it just had other stability issues.
I'm now running a more modest +12.5% CPU overclock, and I got the Diamond versus Diamond runs down to 12.837 seconds-per-battle, running 6 in parallel. This is still with memory running at 1600MHz, so I guess what I was hitting before wasn't purely a memory bottleneck anyway. Also, huh, 22.5% increase from a 12.5% CPU overclock...
I have an Intel 2600K (3.4GHz, 4.4GHz w/ Turbo Boost) and I did a quick benchmark for you. Using Diamond 1.6.8, no GUI, using PowerShell to measure.
1 Instance, 35 Rounds:
- 48.1 seconds Total
2 Instances, 35 Rounds:
- 47.28 seconds Total - 24.17 seconds per Battle
4 Instances, 35 Rounds:
- 1:05 Minutes Total - 15.2 seconds per Battle
8 Instances, 35 Rounds:
- 1:31 Minutes Total - 11.37 seconds per Battle
Though I noticed that PowerShell had a small delay between creating each instance, not sure why. I haven't had a look at RoboResearch, so maybe I'll have a look at that later.
Not sure if it's just my benchmark setup, but if Rednaxela could send over his benchmark setup, maybe I'll be able to test it the same way.
I'm just running the unix command "for i in {1..8}; do sh -c 'time ./robocode.sh -nodisplay -battle battles/sample.battle > /dev/null &'; done" with the battle file set to run diamond versus diamond 35 rounds. I then take the average time outputted from each "time" command and use that.
Hmmm... 8 instances... I just tried 8 instances here and got the following result on my Phenom II 1090 X6 that's clocked up from 3.2GHz to 3.6GHz..... 1m35s per instance average, 11.90 seconds per battle.
Voidious: It seems like the 2600K may not be as fast for robocode as non-robocode benchmarks would lead you to believe?
Yep, I was guessing closer to 5-6x - the PassMark score is 5x my current CPU, and I figured if anything Robocode would scale better to more cores than the average benchmark. I really appreciate all the real world info! Definitely impacting my purchasing decision.
Oh, and one little warning, when I run the X6 1090 at the OCed 3.6GHz, with 8 robocode instances, and stock cooling, it pushes the CPU temperature awfully high (61C when the CPU is spec'ed for a maximum of 62C). Pondering clocking it back to 3.2GHz now that I noticed that, haha.
On a completely unrelated note, the maximum temp is 62? That's pretty low... I know I have pushed my Core 2 P8400 (mobile processor) to 95C before the system shut down to protect the CPU. 60C is my standard CPU temp when I am not in an air-conditioned room (50C in an air-conditioned room). My graphics card also goes up to 108C without problems...
Mobile processors are spec'ed completely differently for heat from what I've seen (my old Core2Duo laptop was spec'ed for up to 105C, for instance).
Aha, I think I found why...
Compare i7-2640M (mobile) and i7-2600K. Notice that the mobile part is specified with "Tjunction" whereas the desktop is specified with "Tcase". It appears desktop CPUs use temperature measurements of the packaging temperature whereas mobile parts directly measure the die temperature. Fun stuff :)
Thanks Rednaxela! I've rerun the test without the little delay between starting up instances, and calculated it the same way (if I understood it right), and this is what I got: 1:17 per instance average, 9.70s per battle.
I don't have *nix to test on my home computer at the moment. My server has *nix, but it's stuck with a Quad-core Xeon in it ;)
Cool, thanks Cuoq! I wouldn't have expected hyperthreads to help so much to scale beyond 4 threads. So it looks like up to 4x as much Robocode throughput as I have now is a pretty solid estimate. Now I just need to grapple with whether that's worth a few hundred bucks... =)
I re-ran some tests here with Diamond 1.6.7 on a fresh Robocode 1.7.3.2, using the time command like Rednaxela. I'm seeing about 57s/battle when I run one at a time and 39s/battle when I run 2 in parallel (just duplicating the command, adding &, taking the max elapsed). I forgot my RoboResearch dir had a cranked CPU constant.
Belated thanks to everyone for the input... Finally pulled the trigger on a Core i7-3770 this weekend. I'm so stoked! Waiting a few hours for dev versions of Diamond to run 500-1000 battles against my test bed has been unbearable lately!
FYI, running Diamond 1.6.7 vs itself as above, on Robocode 1.7.4.0, I get:
- 1 instance: 65s (yes, 1 is taking longer than 4 right now, wtf?)
- 4 instances: 4th one finishes at 53.34s. (~13.34s / battle)
- 8 instances: 8th one finishes at 80.47s. (~10.11s / battle)
One thing I find very odd is it's setting the CPU constant higher here than on my MacBook Pro (5.8m vs 4.0m). Seems like running Diamond 1.7.37 vs itself is about twice as fast, so I'm not sure what's up with that.
Anyway, enough screwing around, time to kick off some RoboResearch. =)
Probably Intel Turbo Boost is confusing the Robocode engine. The engine assumes CPU speed is constant, which is not true with Turbo Boost. CPU constant is being measured while the clock is still low.
I'd guess dynamic clocking (i.e. turbo boost) also explains the 4 instances running faster than 1, since 1 instance may not be setting the CPU usage high enough for the CPU to go to its full clock speed.
I figured you guys were right, but I tried running 1-3 threads of RoboResearch to trigger any clock increase and recalculating the CPU constant while that was going on, and it came out even higher. My best guess is the benchmark used to calculate CPU constant is just way more optimized on Mac and/or on Apple's JVM than in Ubuntu/OpenJDK. Still sounds like a good bet on the slow result for 1 instance though.
Try running background threads at a lower priority, or clients at a higher priority.
But I've never tried OpenJDK; I'm using the Oracle/Sun HotSpot JVM here.
I'm hitting a bizarre bug in either Robocode or my JVM. Basically, I calculate the enemy's position each tick, and on successive ticks, the distance between the positions is greater than 8, which should be impossible. Frequently it's like 11 or 15, occasionally more like 40 or 70. Can anyone else duplicate this? It seems to only happen when running minimized, so I don't think it's any issue with my code. And I'm checking that e.getTime() is only 1 apart, so it's not skipped turns. And it is happening in 1v1 with a radar lock.
For a while this came in the form of thinking my freshly rewritten scan interpolation had a bug. (Which seemed a lot like the "bug" in the old interpolation code...)
This is the right calculation of enemy location, right?
Point2D.Double enemyLocation = project(
    new Point2D.Double(getX(), getY()),
    Utils.normalAbsoluteAngle(e.getBearingRadians() + getHeadingRadians()),
    e.getDistance());

public static Point2D.Double project(Point2D.Double sourceLocation,
    double angle, double length) {
  return project(sourceLocation, Math.sin(angle), Math.cos(angle), length);
}

public static Point2D.Double project(Point2D.Double sourceLocation,
    double sinAngle, double cosAngle, double length) {
  return new Point2D.Double(sourceLocation.x + sinAngle * length,
      sourceLocation.y + cosAngle * length);
}
Hm, the code looks fine to me. Can you perhaps have Robocode save a replay file, and make note of the ticks your robot notices things moving too fast? That way we could compare this to Robocode's internal representation of things, so it would be easier to tell where the problem is.
Wow, good call on the replay file, never looked at one of these before. So it's pretty clear something about the ScannedRobotEvent from the previous tick was wrong - I suspect it's an old ScannedRobotEvent with the wrong time. (I noticed when doing some unit tests that SRE's don't take time in the constructor, guessing they just get it from Robocode engine?)
So here's data I collected from successive ticks in my onScannedRobot:
IMPOSSIBLE: Enemy traveled 23.960320897363644
Round: 3  Ticks: 172 - 173
enemy 1: Point2D.Double[672.296759493604, 268.4670336402143]
enemy 2: Point2D.Double[670.937650031281, 292.3887768662851]
dista 1: 645.4602889259664
dista 2: 643.0225154806783
myloc 1: Point2D.Double[28.46985554539938, 314.3571451334465]
myloc 2: Point2D.Double[28.067090293868013, 306.3672903067204]
heading 1: 3.1650118725284133
heading 2: 3.1919596027245163
bearing 1: -1.5230587899322943
bearing 2: -1.5994228013460932
And here's data for those ticks from the replay:
<turn round="3" turn="172" ver="1">
  <robots>
    <robot id="0" vsName="ScanTester*" state="ACTIVE" energy="84.0" x="28.46985554539938" y="314.3571451334465" bodyHeading="3.1650118725284133" gunHeading="1.6650151674145717" radarHeading="1.78588401972672" gunHeat="1.1999999999999997" velocity="8.0" teamName="voidious.ScanTester*" name="voidious.ScanTester*" sName="ScanTester*" ver="2">
      <debugProperties/>
      <score name="voidious.ScanTester*" totalScore="116.0" ...<snipped>.../>
    </robot>
    <robot id="1" vsName="Phoenix 1.02" state="ACTIVE" energy="70.10000000000005" x="671.8062575598343" y="284.43607160572503" bodyHeading="3.0567611234507766" gunHeading="4.4695696509271885" radarHeading="4.843960979524177" gunHeat="0.0" velocity="-8.0" teamName="davidalves.Phoenix 1.02" name="davidalves.Phoenix 1.02" sName="Phoenix 1.02" bodyColor="FF404040" gunColor="FF404040" radarColor="FF00FFFF" ver="2">
      <debugProperties/>
      <score name="davidalves.Phoenix 1.02" totalScore="284.74" ...<snipped>.../>
    </robot>
  </robots>
  <bullets>...<snipped>...</bullets>
</turn>
<turn round="3" turn="173" ver="1">
  <robots>
    <robot id="0" vsName="ScanTester*" state="ACTIVE" energy="84.0" x="28.067090293868013" y="306.3672903067204" bodyHeading="3.1919596027245163" gunHeading="1.641953082596119" radarHeading="1.498022145465518" gunHeat="1.0999999999999996" velocity="8.0" teamName="voidious.ScanTester*" name="voidious.ScanTester*" sName="ScanTester*" ver="2">
      <debugProperties/>
      <score name="voidious.ScanTester*" totalScore="116.0" ...<snipped>.../>
    </robot>
    <robot id="1" vsName="Phoenix 1.02" state="ACTIVE" energy="68.11000000000006" x="670.937650031281" y="292.3887768662851" bodyHeading="3.0328022439880638" gunHeading="4.449386601269055" radarHeading="4.698449464559538" gunHeat="1.298" velocity="-8.0" teamName="davidalves.Phoenix 1.02" name="davidalves.Phoenix 1.02" sName="Phoenix 1.02" bodyColor="FF404040" gunColor="FF404040" radarColor="FF00FFFF" ver="2">
      <debugProperties/>
      <score name="davidalves.Phoenix 1.02" totalScore="284.74" ...<snipped>.../>
    </robot>
  </robots>
  <bullets>...<snipped>...</bullets>
</turn>
So my location and heading are right for both ticks; the enemy location on the latter tick is right, but wrong for the previous tick. The e.getDistance() is in the same ballpark, but the bearing has to be way off to have projected a point 24 distance away from the next tick. (Didn't actually do the math...)
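(For what it's worth, doing that math with the numbers above: the computed tick-172 point (672.30, 268.47) is about 16 units from where the replay puts Phoenix that tick (671.81, 284.44), while the replay shows Phoenix moving exactly 8 units between ticks 172 and 173. A 16-unit lag at speed 8 is consistent with the tick-172 ScannedRobotEvent carrying data that's roughly two ticks stale.)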
This definitely only happens at high speeds / loads. This is a modified Jen and I couldn't get it to happen vs Raiko until I added a few hundred thousand sqrt operations into the run loop. It happened against Phoenix without any such hacks. Both only when minimized.