Talk:Darkcanuck/RRServer


Initial Discussion

Fire away...

Just a suggestion for an additional check: I have never seen a bot score more than 8000 points, so this could be checked too. When examining the results that messed up the original roborumble rating beyond repair, I saw results of 20000 against 16000. (That's what you get when running OneOnOne with MELEE=YES.) For the time being I'll leave my client running (unattended) for ABC's server, as I don't really have the time for bughunting. Your effort however seems promising. Good luck. -- GrubbmGait

  • Thanks! That's a good check, will be combining that with the survival >=35 (also your suggestion I think) once I rearrange the error handling and failure output to the client. Then I'll look into ELO... --Darkcanuck
  • Your checks have both been implemented. -- Darkcanuck
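The two checks discussed above can be sketched roughly as follows. This is an illustration in Python (the actual server is PHP), and the field names and the exact meaning of the survival threshold are my assumptions, not the server's real implementation:

```python
# Thresholds from the discussion above: no sane 35-round 1v1 result
# exceeds ~8000 points, and the two bots' survival-first counts
# should together cover the 35 rounds.
MAX_SCORE = 8000
MIN_TOTAL_FIRSTS = 35

def battle_looks_valid(score1, score2, firsts1, firsts2):
    """Reject results like the 20000-vs-16000 scores produced by
    running OneOnOne with MELEE=YES. Parameter names are hypothetical."""
    if score1 > MAX_SCORE or score2 > MAX_SCORE:
        return False
    if firsts1 + firsts2 < MIN_TOTAL_FIRSTS:
        return False
    return True
```

A result failing either test would be refused rather than folded into the rankings.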

Looking very nice! I have a couple of questions and thoughts I thought I'd mention. What does the "Ideal" column in the results mean? One thought I had about ratings: perhaps it would be best to make the APS fill missing pairings with Glicko-based estimates? I'm thinking that would give the best long-term stability/accuracy once pairings are complete, while having something more meaningful before the pairings are complete. --Rednaxela 01:18, 26 September 2008 (UTC)

Thanks! I've just posted a bit more about ratings here. The "Ideal" column is my attempt to reverse-calculate a rating based on a bot's APS. I just inverted the Glicko formula for "E" (expected probability) to yield a rating given E (i.e. APS) and a competitor's rating and RD. For the latter two I used the defaults (1500 and 350), so theoretically if the APS represents the score vs an average bot (and there's a uniform distribution?) then the rating might converge to the "ideal" value. But I have no idea if it works, I just wanted to see how close it might be. I'm not sure you could fill in the pairings using Glicko + APS -- the reason systems like Glicko exist is to get around the problem of incomplete pairings, so the Glicko rating should be enough in itself. If it's accurate, that is -- we'll see once the ratings catch up to the pairings already submitted... -- Darkcanuck 03:39, 26 September 2008 (UTC)
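The inversion described here can be sketched concretely. This is a Python illustration (the server itself is PHP) using the published Glicko-1 formulas for E and g(RD); the function names are mine, and the 1500/350 defaults are the ones mentioned above:

```python
import math

Q = math.log(10) / 400  # Glicko-1 constant q

def g(rd):
    """Attenuation factor for the opponent's rating deviation."""
    return 1.0 / math.sqrt(1.0 + 3.0 * Q * Q * rd * rd / math.pi ** 2)

def expected_score(r, r_opp, rd_opp):
    """Glicko-1 expected fractional score E against one opponent."""
    return 1.0 / (1.0 + 10 ** (-g(rd_opp) * (r - r_opp) / 400.0))

def ideal_rating(aps, r_opp=1500.0, rd_opp=350.0):
    """Invert E to find the rating that would produce this APS
    (as a 0..1 fraction) against a single 'average' opponent."""
    return r_opp + (400.0 / g(rd_opp)) * math.log10(aps / (1.0 - aps))
```

By construction the inversion round-trips: feeding the resulting rating back into `expected_score` against the 1500/350 default recovers the original APS, and an APS of exactly 0.5 maps back to 1500.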

Ahh, I see. Thanks for the explanation. If the Glicko rating doesn't converge very, very close to the "Ideal" then I'd say it alone might not be the best fit for Robocode, since complete pairings are not hard to get here. The reason I suggest using APS and filling missing pairings with Glicko-based percent estimates is that my proposed method is guaranteed to converge to the exact APS ranking order when pairings are complete, and would quite surely be at least slightly better than APS when pairings are not complete. Perhaps I'm more picky than most, but I'd consider a hybrid necessary if Glicko doesn't in practice converge to "Ideal" within an accuracy that preserves the exact APS rankings (which I think is very plainly and simply the most fair when there are complete pairings). I suppose we'll see how accurately Glicko converges :) --Rednaxela 04:25, 26 September 2008 (UTC)

Be careful about the "ideal" convergence concept! Keep in mind that I made this value up and it doesn't really have a statistical basis of any sort. I was just curious what a naive reversal with a single data point might produce, in order to get an idea of what neighbourhood DrussGT's rating might be in, for example. I also wanted to get a sense of whether I had programmed the formulas correctly. I wonder, though, if we're abusing these rating systems by using %score instead of absolute win/loss values (1/0)? Would the Glicko rating converge more rapidly to match the APS scale if I had chosen win/loss? I'm very curious, but not so much as to interrupt the current rebuild, which may take longer than I thought. -- Darkcanuck 04:54, 26 September 2008 (UTC)
Well, I'm not talking about the convergence to that "Ideal" column. I'm talking about convergence of the relative rankings as opposed to specific rating numbers. If the rankings don't converge to exactly the same order as APS, then I think there's issue enough to justify a hybrid that uses APS, with ELO or Glicko to estimate missing pairings. --Rednaxela 05:10, 26 September 2008 (UTC)
Gotcha. I suppose you could keep track of the rating (Elo or Glicko) and just use it to calculate expected scores for missing pairings. Then generate an estimated APS for full pairings. We'll have to see how well the ratings stabilize. I'm thinking I should have used Glicko-2 instead, since it includes a volatility rating to account for erratic (read problem bot) performance. -- Darkcanuck 06:22, 26 September 2008 (UTC)
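The hybrid proposed here can be sketched briefly. This Python illustration (names and data shapes are mine, not the server's) averages actual pairing scores, substituting the Glicko-1 expected score wherever a pairing is missing; Glicko-2 would follow the same pattern with its own expectation formula:

```python
import math

Q = math.log(10) / 400  # Glicko-1 constant

def g(rd):
    return 1.0 / math.sqrt(1.0 + 3.0 * Q * Q * rd * rd / math.pi ** 2)

def glicko_expected(r, r_opp, rd_opp):
    """Glicko-1 expected fractional score against one opponent."""
    return 1.0 / (1.0 + 10 ** (-g(rd_opp) * (r - r_opp) / 400.0))

def hybrid_aps(bot, opponents, pair_scores, ratings):
    """Estimated APS: average percent score over all opponents, filling
    missing pairings with the Glicko-based expectation.
    pair_scores: {(bot, opp): percent score 0..100}
    ratings:     {bot: (rating, RD)}"""
    total = 0.0
    for opp in opponents:
        if (bot, opp) in pair_scores:
            total += pair_scores[(bot, opp)]
        else:
            r, _ = ratings[bot]
            r_opp, rd_opp = ratings[opp]
            total += 100.0 * glicko_expected(r, r_opp, rd_opp)
    return total / len(opponents)
```

With complete pairings the estimate reduces to plain APS, which is exactly the convergence property Rednaxela argues for above.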

Started sending the results to your server, as long as you relay them to ABC's server. What is the delay btw? --GrubbmGait 10:08, 26 September 2008 (UTC)

Thanks for joining in! I have no plans to stop relaying results and have been doing so for almost a week now. If by "delay" you mean occasional slow connections, it's due to the scoring update and I've posted it on the known issues page. I have this process cranked up at the moment while I try to get the ratings to catch up, but it will get faster soon. :) -- Darkcanuck 15:25, 26 September 2008 (UTC)

Great job with this server! You can always grab the ranking/battles_* files from my server and submit them all into yours. I'm also experimenting with MySQL atm. My SQL skills are a little rusty but it's all coming back pretty fast :).

I also have a few doubts about the new ranking method. The first one is: why? From what I understand, Glicko is an ELO extension for rankings where the match frequency is not uniform between participants, which is not the rumble's case. As an experiment it's very cool, but for me the "old" ELO method is time-tested and proven to work great, and should be the default sorting method for the ranking table. --ABC 11:23, 26 September 2008 (UTC)

I also have some doubts about whether Glicko will actually give better or much different results than ELO; however, I'm not sure ELO is really the best default ranking system when full pairings are easy to get. I suppose we'll see once your server gets to full pairings, but I'm strongly suspecting there will be some ranking deviations from the APS ranking, which I think is hard to argue is in any way biased. --Rednaxela 13:26, 26 September 2008 (UTC)
I have doubts as well, but I wouldn't have known until I tried it. My major objection against Elo is the lack of a clear, published implementation. It was easier to implement Glicko than to sort through the RR server code. If someone can clarify this for me, sure, I'll try it out. Why not? -- Darkcanuck 15:25, 26 September 2008 (UTC)

Bravo

I just want to leave a note saying you're awesome. :) It's really nice having someone put effort into improving the rumble itself. Good work! --Simonton 03:27, 11 October 2008 (UTC)

Oh, and FNL, if you're reading this, that goes double for you :). --Simonton 03:30, 11 October 2008 (UTC)

Style

Do you think you or I could restyle the page? Some basic CSS could go a long way to making the page look more modern and less of an eyesore. An example of my work is here; it wouldn't look like my page there, but it will be clean (and it will validate). Currently it's not even set up as a proper webpage, which means it will be rendered in quirks mode by all browsers -- a very slow and CPU-intensive rendering mode.

In fact there is a lot you can do to both reduce HTML elements and increase rendering speed, such as changing the <td><b> combo into just <th> tags, because that's what they are for: <td> = table data, <th> = table header. With some CSS you can adjust their alignment. It wouldn't require much CSS -- in fact a lot of CSS is undesirable in a simple page such as this -- but CSS is preferred over presentational tags because it is actually faster in most cases (very old or poorly designed browsers being the exception).

Chase-san 08:02, 14 October 2008 (UTC)

  • I very strongly agree! It's on the roadmap, but I've focused on the data side first. The current "pages" were based on a view-source from the old server. A little css and valid xhtml would go a long way. I also want to switch to a template system (maybe Smarty or Zend?) for easier reading and better reuse -- having html mixed in with php makes for some very ugly code. If you want to style some static content and send it to me, that would be great! --Darkcanuck 15:04, 14 October 2008 (UTC)
XHTML is very nice; while most browsers support true XHTML (except IE and Konqueror), the ones that don't control a large enough majority that it would have to be served as text/html anyway. This mitigates the real purpose of an XHTML page, but it's nice to have the framework in place for when they catch up (all the work that would have to be done is switching the content type). I think using Smarty or Zend is overkill unless you plan on extending the system further; I'd only suggest them if you plan on doing something like roborumble.org. They are template engines, meaning you would have to make templates for them, which just adds a lot of extra overhead on something simple like this. Remember KIS: keep it simple. --Chase-san 21:36, 14 October 2008 (UTC)
If you really want a nice, super-quick, super-simple "template engine", I suggest you consider this. Instead of bothering with special "template" languages, you write your templates in plain PHP, and all the "template engine" does is set up variables, making really clean shorthand like <?=$title;?> all that's needed to put some variable in the template. I once tried it when hacking around and found it to be a really nice KIS approach to "template engines". Also, the author put the code there in the public domain, so there are no issues using it here as we see fit. --Rednaxela 03:47, 15 October 2008 (UTC)
I at one point designed my own KIS template system. It was similar to others except that the content to replace was in {}, for example {title}, and for other parts I did things like <table>{row_start}<tr><td>{row_num}</td><td>{row_data}</td></tr>{row_end}</table>. All this was kept in a separate file and required parsing, but otherwise it was fairly simple: a template engine that used only half a dozen commands, with functions to fill in the data. I will see if I can locate it, or remake it, if you like the sound of using a template but still want to keep it very simple. --Chase-san 22:41, 15 October 2008 (UTC)
Thanks for the suggestions guys, but I'm sticking to my original plan (Smarty). If the template engine ever becomes the bottleneck, then I'll look into something custom. --Darkcanuck 02:13, 16 October 2008 (UTC)
Okay, cool. I would like to work on a template for the actual score page then. I am great at CSS and at making it cross-compatible with other browsers (namely IE, Firefox, and Safari -- I use Opera, so obviously it will work for that too). Unlike making robots, web pages are not very time-consuming. Do you have any kind of messenger we could talk on? (I have, or can get, any of them.) --Chase-san 04:08, 16 October 2008 (UTC)
Excellent! I don't use messaging much, and I'm travelling at the moment, so email is better: jerome-at-darkcanuck-net --Darkcanuck 23:43, 16 October 2008 (UTC)


Team Rankings

Is it an idea to get the battles for teams from Pulsar's server? I think they have no weird results, and your ranking will at least have a team ranking then. --GrubbmGait 17:57, 24 October 2008 (UTC)

Good idea! I'll grab the battle file, but need to figure out how to exclude older team versions to keep the server load down. --Darkcanuck 01:58, 25 October 2008 (UTC)

Table Sorting

Very nice things lately! I do have a couple of little gripes though. One thing is I think it would be more natural if the first click sorted highest-first, unlike how TableSorter seems to operate by default. Secondly... ugh... it's so damn slow to sort. Even on my fairly modern system there's a very ugly delay when sorting the table (a 20-year-old machine could probably sort the data faster with static code... not everyone uses Google Chrome or an experimental FF build), and I imagine this would become a very annoying delay on anything older. Not only is the JS sorting slower than server-side, but there's no indication of it processing/loading, which irks me a little. Perhaps while the JS sorting is busy there should be a little line or two of code to make a 'loading...' indicator of some sort? In any case, great work lately! --Rednaxela 21:39, 26 October 2008 (UTC)

The problem with javascript when sorting big tables is not the sorting in itself but the big number of DOM document changes when you generate the resulting table HTML. I'm currently developing a small javascript application at work that sorts a table of around 500 entries pretty much instantaneously. It only shows the top 5 entries as a table (similar to DC targeting, curiously :)); if I generate the 500 rows it becomes very slow. --ABC 23:14, 26 October 2008 (UTC)

After some reading, I found that apparently TableSorter's slowest part is how it reads the data from the DOM every time you sort. Perhaps a more efficient method would be to send the data in both HTML form and JSON form, and let the script change the order of the rows in the DOM based on the data efficiently parsed from the JSON and stored in JS memory. I think that model would have the fewest DOM operations and thus be the most efficient way to do client-side sorting. On a related but diverging note... once at that point, it might not be that much more work to do 'live' score updates... (which would also reduce bandwidth use in the face of mad refreshers). I may be tempted to try to code such a fancy efficient-sorting, live-updating score view some time... --Rednaxela 00:19, 27 October 2008 (UTC)

Well, it's faster than re-requesting the page, which the old sort did. :) But if you find a way to speed it up, I'm all ears -- javascript is pretty new to me. I don't like the default sort order either, but there didn't seem to be an option to start with a descending sort. The Glicko columns are also a little weird due to the RD value in brackets. I'm not sure I follow the bit about "live updates" though, the current pages are as live as you can get. Scores are updated every time a new result is uploaded. --Darkcanuck 05:47, 27 October 2008 (UTC)

Actually, I'm finding it very distinctly slower than re-requesting the whole page (of course my campus internet here is pretty damn fast). Well, I think it could certainly be sped up by methods like I said above, sending the data in JSON form and keeping it in JS memory, though it would likely involve using our own code instead of TableSorter (or mangling TableSorter considerably beyond recognition). And what I mean by "live updates" would be using "AJAX" stuff to ask the server every minute or so whether there have been any more recent updates; the server sends any in JSON form and the results page gets updated without refreshing. --Rednaxela 12:35, 27 October 2008 (UTC)

Contributors

Just another small idea: can you distinguish the contributors per month? Long ago, late 2004 I think, we had a sort of ranking of contributors when rebuilding the rankings after a server crash. (Heh, sounds familiar . . ) This way 'new' contributors see their names without the need to scroll way down. Also: every ranking is a competition ;) --GrubbmGait 19:00, 28 October 2008 (UTC)

Are you saying you don't like my score of 410,000+? ;) (melee is the key to high numbers, btw) Good idea to split the numbers out in more detail. I guess I could add some more columns to the users table to make some rolling counts. What interval would be best: once per day to keep a 30-day window, or start fresh every month to make a new competition? --Darkcanuck 05:40, 29 October 2008 (UTC)
Well I personally think "once per day to keep a 30-day window" would be best for being a more meaningful and current reflection of things, but starting fresh each month would be best if we want to have something like a 'monthly rumble contributor award'. Of the tradeoff, I'm leaning to the former myself. Of course, if we really wanted we could just track both :) --Rednaxela 14:53, 29 October 2008 (UTC)

Ok, we now have current month and last-30-days upload rankings, split by game type. I've tried to scale the melee numbers to match the actual number of battles run (45 pairings uploaded per battle?). Hopefully someone will start to submit team battles (can't get my client to work). --Darkcanuck 23:51, 30 November 2008 (UTC)
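The "45 pairings per battle" figure comes from the pairwise results a melee battle produces: each 10-bot battle yields one result per pair of bots, i.e. C(10, 2) = 45. A small Python sketch of the scaling (function names are mine, purely illustrative):

```python
from math import comb

MELEE_FIELD = 10  # bots per melee battle in the rumble

def battles_from_pairings(pairings_uploaded, field_size=MELEE_FIELD):
    """Each N-bot melee battle produces C(N, 2) pairwise results, so
    dividing the pairing-upload count by that recovers battles run."""
    return pairings_uploaded / comb(field_size, 2)
```

So a contributor who uploads 450 melee pairings has actually run 10 battles, which is the scaling the upload rankings apply.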

Participant list

Can you please create a mirror of the official participants list on your server (updated automatically)? That would be good for when the official page is off-line, like now ^-^ --Lestofante 22:05, 1 Dec 2008 (UTC)

Try this: http://darkcanuck.net/rumble/particip1v1.txt . I just uploaded my copy to the server and added the 'pre' tags the rumble client is looking for. Once the old wiki comes back I'll try mirroring all of the participant lists -- shouldn't be difficult, just a daily 'wget'... --Darkcanuck 03:35, 2 December 2008 (UTC)
Thanks, now my client works. Here's the modification: PARTICIPANTSURL=http://darkcanuck.net/rumble/particip1v1.txt. For the mirroring system, don't use just a plain wget -- use a little script that checks the integrity of the list. --lestofante 09:37, 2 December 2008 (UTC)
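The integrity check suggested here could be as simple as refusing to overwrite the mirror when the fetched file doesn't look like a participants list. A Python sketch, assuming lines of roughly the form `package.Bot version,http://.../bot.jar` -- the regex is a loose approximation of the rumble list format, not the client's actual parser, and the minimum-entry count is an invented safeguard:

```python
import re

# Loose sketch of one participants-list line:
# "package.BotName version,http://.../bot.jar"
LINE_RE = re.compile(r'^\S+ \S+,https?://\S+$')

def list_is_sane(text, min_entries=100):
    """Return True only if the fetched text looks like a complete
    participants list: enough non-blank lines, all well-formed.
    Guards the mirror against truncated downloads or HTML error pages."""
    lines = [ln for ln in text.splitlines() if ln.strip()]
    if len(lines) < min_entries:
        return False
    return all(LINE_RE.match(ln) for ln in lines)
```

A daily cron job could then do the wget into a temp file, run this check, and only replace the published mirror file on success.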

Survival

One thought: now that removing the Glicko-1 column has cleared up a little space... maybe those survival percents from the details pages could be included? I think it would be nice to be able to easily see which bots are strong survivalists ;-) --Rednaxela 23:42, 2 December 2008 (UTC)

Too easy! :) --Darkcanuck 04:53, 3 December 2008 (UTC)
Nice. Now just to wait for all the bots to return so I can see how good 'RougeDC survival' really ranks in that... :) --Rednaxela 05:05, 3 December 2008 (UTC)
It could be a long wait -- reactivation is just as slow as removal. But at least clients won't be fighting over the two. --Darkcanuck 05:13, 3 December 2008 (UTC)
Aye, but at least based on the rate at which my client is currently uploading bots of which some need to be reactivated, I think there's a good chance it may be back to normal in less than 12 hours from now. --Rednaxela 05:29, 3 December 2008 (UTC)

"Suspicious Battle List"

One thought I had is that bad 0 scores could be filtered by taking a look at the expected score, and discarding 0 results where they seem unreasonable. Of course, an alternative to automatic rejection would be making a "suspicious battle list" page that could be watched for manually initiating removals. I would imagine it would take no more than a single SQL statement of moderate complexity to list suspicious uploads. --Rednaxela 06:25, 29 December 2008 (UTC)

Neat idea. Although bots which throw the occasional exception may get a lower than expected score once in a while. Rather than run a query against the battles table, the server could flag battles as they're submitted if the score deviates too far from the expected value. What do you think a good range would be, considering some bots have very high PBI's? --Darkcanuck 06:48, 29 December 2008 (UTC)

Well, I think running a query against the battles table is necessary due to the number of bad results already in the server, which I'd consider quite important to fix -- manually searching for all of them would be time-intensive. As for what kind of deviation? Because of those high-PBI cases, I'd say something roughly like the following would be good. Flag a battle if:
  1. it deviates from the Glicko-2 expected result by more than 50, or
  2. it deviates from any results submitted by *other* clients by more than 30, or
  3. the score is exactly 0 when the expected score is anything greater than 20.
Of course, I strongly believe we can't get a really good idea of what thresholds are right until we do some queries on the battles database to determine what level of sensitivity is most correct. --Rednaxela 07:07, 29 December 2008 (UTC)
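The three flagging rules can be written down directly. A Python sketch (the real check would live in the PHP upload handler or an SQL query; function and parameter names are mine, and the thresholds are the proposed ones, not tuned values):

```python
def is_suspicious(score, expected, other_client_scores):
    """Flag an uploaded percent score using the three proposed rules.
    'expected' is the Glicko-2 expected percent score for the pairing;
    'other_client_scores' holds percent scores for the same pairing
    submitted by other clients."""
    # Rule 1: too far from the Glicko-2 expectation
    if abs(score - expected) > 50:
        return True
    # Rule 2: too far from what any other client reported
    if any(abs(score - s) > 30 for s in other_client_scores):
        return True
    # Rule 3: a zero score where a real score was clearly expected
    if score == 0 and expected > 20:
        return True
    return False
```

Battles tripping any rule would land on the suspicious-battle page for manual review rather than being auto-rejected.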


Source code

Can I have your server source, please? I've been writing PHP and MySQL for over 3 years now and I've planned to create a new Thai RoboRumble for my country! Hope you'll give it to me. You can email me at the address found on my user page. » Nat | Talk » 09:20, 10 February 2009 (UTC)

Rumble ideas

Hi! I'm very thankful to you for doing the new engine. I was thinking about brand new femto and haiku rumbles. What do you think? In my opinion it'd be fantastic if there were these kinds of categories. They seem really cool and exciting, but unfortunately there isn't any ranking or challenge for them. Femto can't be hard to implement; maybe haiku is a harder task. I imagined new categories with new participants lists for them, though I can also imagine the existing bots getting this kind of rank -- but then it would lose its importance. --HUNRobar 17:55, 14 February 2009 (UTC)

For femto battles, we'd really need to modify the RR@H client. For haiku, I think not -- it requires a human to check how many lines there are. But wait a minute: I'm now creating my new rumble server which supports many old rumble ideas as well as these rumbles! That's why I want the source code above. If you want to test your haiku bot or femto bot, you can see the old ranking and bots in Robocode Little League by Kawigi. » Nat | Talk » 01:48, 15 February 2009 (UTC)

Valid versions

By the way Darkcanuck, just to let you know:

  • I'm quite sure 1.6.1.4 (NOT plain 1.6.1) is at least as rumble-stable as 1.6.0 is, and is better because it fixed how ITERATE was broken.
  • Also, I'm pretty sure EVERY single version from 1.6.2 to 1.7.1 Beta 2 has been bad for the rumble.
  • 1.7.1 Final looks like it's probably good for the rumble except for:
  1. http://sourceforge.net/tracker/?func=detail&aid=2727675&group_id=37202&atid=419486
  2. http://sourceforge.net/tracker/?func=detail&aid=2627698&group_id=37202&atid=419486

--Rednaxela 07:34, 3 April 2009 (UTC)

  • Agreed. I really like 1.7.1, even in alpha, compared to 1.7.0.2, which has a ton of bugs =D I'm figuring out what's behind the SandboxDT issue, and I'm sure Fnl is fixing another bug, so expect a 1.7.1.1 that's better for the rumble :-) » Nat | Talk » 16:07, 3 April 2009 (UTC)

Ok, I can add 1.6.1.4 to the list -- but it won't matter much since that client won't report its version number either. Nice summary though. (And you know I filed that 1.7.1 melee bug, right?) Anyone interested in patching the 1.5.4/1.6.0/1.6.1.4 rumble jar(s) with the version check from 1.6.2? --Darkcanuck 15:40, 3 April 2009 (UTC)

  • Users might not download a patched version, either. I'll try to do it -- just getting my head spinning checking out from the Robocode SVN :-) » Nat | Talk » 16:07, 3 April 2009 (UTC)

I see you patched your client already. Just a few suggestions: you can (mostly) detect the version from the user suffix. Most users suffix their name with the version (except Deewiant), so you can check with that. » Nat | Talk » 19:11, 5 April 2009 (UTC)

Yes, but I like guarantees that the right version is being used. :) I've patched both 1.5.4 and 1.6.1.4 to report the client version and I'll post the new jars later today. Once rumble users have switched, then I can turn off the workaround for older clients. --Darkcanuck 22:08, 5 April 2009 (UTC)

I've been using 1.6.0 for the rumble. From what I understand, I should install 1.6.1.4 (actually the version I develop with) and replace its jar with the patched one? Has it been tested, even a little bit, to ensure no side effects, or should I set upload to NOT for a while? --zyx 03:52, 6 April 2009 (UTC)

I've tested both jars on my system and they seem to be fine. If you want to stick with 1.6.0 I can patch it tomorrow -- I got lazy and only did "ol' reliable" (1.5.4) and the latest stable version. Right now I'm using 1.6.1.4 myself, although I can't get that one to work on my Mac. --Darkcanuck 04:20, 6 April 2009 (UTC)

I tested 1.6.1.4 a fair bit and for a period of time it was what I was using for rumble. And also, like I note above, that version fixes the ITERATE option which has been broken for a long time (it still ran with ITERATE=YES in older versions, but it didn't choose the best bots properly after the first iteration). --Rednaxela 04:17, 6 April 2009 (UTC)

No no, I don't want to stick with 1.6.0. I used 1.6.1.4 as my first rumble client, then read that the official versions were 1.5.4 and 1.6.0, so I downgraded; 1.6.1.4 is actually what I'd like to use. When I saw Rednaxela's post above I had already decided to switch -- I don't use ITERATE, but I still prefer the newest stable version, and since it's the version I develop in, even more so. My question was about the patched jars: sometimes one change affects more than one would like it to, so I asked whether you had tested it, relatively enough :-p. I will run the patched 1.6.1.4 later tonight, probably with UPLOADS set to NOT just in case, and tomorrow let it upload if all is well, or report any weird behavior if I see any. Good job anyways. --zyx 05:20, 6 April 2009 (UTC)

FYI, SVN revision r2352 is the update where it was added. (I think you knew already, Darkcanuck.) Actually, I saw only a few lines of changes :-) BTW, it's the engine for 1.6.2 (AKA the melee-bugs version), not the old engine -- there are a lot of changes in 1.6.2. Shame on me: as I said above I'd create a patch, but I haven't even started yet. I don't think you need to patch 1.6.0.1, as everybody but Darkcanuck and GrubbmGait uses 1.6.1.4 (at least after tonight). AFAIK there is no bot that can run on 1.5.4 or 1.6.0.1 but not on 1.6.1.4 -- or is there? If everybody uses 1.6.1.4, I shall release a bot with an underscore in its version again =D » Nat | Talk » 07:48, 6 April 2009 (UTC)

Zyx, why don't you use ITERATE? David Alves said somewhere that ITERATE is twice as fast as using a shell script. » Nat | Talk » 07:48, 6 April 2009 (UTC)

Probably because of that -- I don't like my processor's temperature when ITERATE is on. I have a shell script that sleeps after every iteration, and it can be set to run a given number of roborumble iterations per meleerumble iteration. And I know that ITERATE is much faster, because the initial version check takes quite some time. I have a modified version of RoboRumble that basically does the same thing but doesn't upload results (it stores them to files) and has a Thread.sleep(X), which I use to test new versions of my bots; that one is faster and I can still sleep between iterations. Although adding the sleep to the official version would be really simple, I would still be missing my roborumble/meleerumble relation. Also, Darkcanuck is a bit at fault: since the server is faster, it's harder for the processor to cool down :-S. --zyx 08:28, 6 April 2009 (UTC)

I went ahead and patched 1.6.0 anyway -- but this one I haven't tested. The other two have been tested for both one-on-one and melee, and I've been using 1.6.1.4 for two days now. If you find the server too fast, I can increase the upload throttling :) (right now there's a one second delay between uploads) --Darkcanuck 15:14, 6 April 2009 (UTC)

Rednaxela, the bot issues are fixed, please verify. Unfortunately, new bugs were discovered. [1] :-( » Nat | Talk » 02:16, 8 April 2009 (UTC)

I think it may be a while before a stable 1.7.x version is ready. There was never a stable 1.6.2, and 1.7 adds more systemic changes, so there will be more bug hunting to come! I might add a "test" mode on the server so basic rumble checks can be done -- let me know if you have suggestions. --Darkcanuck 02:24, 8 April 2009 (UTC)
Maybe you don't know that 1.7.1 has nearly no bugs left. It inherited a lot of bugs from 1.7.0 that weren't reported on SF, plus a lot of new bugs too, but I have hunted down more than 50 bugs already (since the alpha version).
The "test" mode is a good idea, but will it overload your server? I suggest, if your MySQL is fast enough, adding a `stable` field: stable result queries use WHERE `stable` = 1, and "test" result queries take all of them. Easy? » Nat | Talk » 03:41, 8 April 2009 (UTC)
Even when all the reported bugs are fixed, we will need to spend some time running it to make sure the results are valid. The "test" mode I was considering wouldn't actually store anything to the database, just do the basic data validation checks before throwing away the results. This way you could run a new client and monitor the results. There's already a status flag in the battle results table which could do what you suggested, but I don't know that we need to store the test results. A fairly simple improvement on this plan would be to calculate the difference between the real rumble results and those from the test client, then send this data back to the client. --Darkcanuck 04:18, 8 April 2009 (UTC)
The Team Rumble results are invalid right now; that's why those melee bots are going into the Team Rumble results. But I think the RoboRumble and MeleeRumble results are valid now. I'll test by setting UPLOAD = NOT in the latest 1.7, putting the results into 1.6.1.4 and letting it upload. :-) » Nat | Talk » 04:30, 8 April 2009 (UTC)
Ergh! Sorry, please consider deleting all results from Nat_1711 :-( 1.6.2 and up use survival score but older versions use place count. I'm very sorry. » Nat | Talk » 04:48, 8 April 2009 (UTC)
Maybe next time don't change your client's version number? The check is there for a reason... --Darkcanuck 06:27, 8 April 2009 (UTC)
I'm not playing games: all the results were uploaded under a new username (Nat_1711 vs. Nat_1614 or Nat). I think you can delete them with one SQL query, can't you?
But the results from it are very close to the original scores; I can't spot any difference except survival. I have a thousand 1.7.2 alpha results saved on my machine waiting for 1.6.1.4 to upload them :-) Just look at your code: using survival score doesn't matter in one-on-one/teams since it automatically calculates the percent score.
I hope you plan for a newer version soon. This version runs twice as fast as 1.6.1.4 on my machine -- but it's loaded with a pile of Java exceptions, too. » Nat | Talk » 06:49, 8 April 2009 (UTC)
I appreciate your taking the time to test 1.7.2, but please don't upload results from 1.7.2 using a 1.6.1.4 client! If there are problems with the results, how can we separate them easily? Removing bad results from the rumble is more complicated than a single SQL query and unfortunately I haven't automated it yet: 1 - the bad result has to be flagged/deleted, 2 - pairing scores need to be recalculated, 3 - ELO/Glicko/APS rankings need to be updated at least once to smooth out the bad data (only APS can be recovered cleanly). When the open issues with 1.7.x are fixed and a new release comes out then we can look into allowing the new version. But for now, please stick to the official versions when uploading. There are over 5 million battles stored on the server, I don't want to search through all of them to find a handful of bad ones! ;) --Darkcanuck 07:15, 8 April 2009 (UTC)
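The three-step cleanup described here can be sketched on in-memory structures. This is purely illustrative Python (the real server keeps these in MySQL tables, and the data shapes below are my invention); it shows why step 3 only recovers cleanly for APS, since a pairing average can be recomputed from surviving battles while ELO/Glicko updates are path-dependent:

```python
def remove_bad_battle(battle_id, battles, pairings):
    """Sketch of the cleanup steps: battles maps id -> (bot1, bot2,
    score1_pct); pairings maps (bot1, bot2) -> list of battle ids.
    Returns the pairing's recomputed average percent score."""
    bot1, bot2, _ = battles.pop(battle_id)        # 1. flag/delete the bad result
    key = (bot1, bot2)
    pairings[key] = [b for b in pairings[key] if b != battle_id]
    # 2. recalculate the pairing's average percent score from what's left
    remaining = [battles[b][2] for b in pairings[key]]
    avg = sum(remaining) / len(remaining) if remaining else None
    # 3. a full ranking re-run (ELO/Glicko/APS) would follow here; only
    #    APS is exactly recoverable, the incremental ratings just smooth out
    return avg
```

Even this toy version shows the work is more than one DELETE statement, which is the point being made above.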

I noticed the message about patching roborumble.jar, so I did, but then I get the following when uploading:

OK. Client version null is not supported by this server! Please use one of these: 1.5.4, 1.6.0, 1.6.1.4

I tried patched versions of both 1.5.4 and 1.6.1.4, but I got the same message each time. Beats me what's up with that; for now, I reverted back to the unpatched roborumble.jar (under 1.6.1.4, for what it's worth). --Deewiant 15:22, 9 April 2009 (UTC)

Sounds like the patched roborumble.jar is working but the game engine isn't returning a version number (just an empty string). Can you try a clean install (just copy over your bot jars and the files under roborumble/)? I think the engine pulls its version number from the versions.txt file, so if it's missing or has been updated then this could happen. --Darkcanuck 07:59, 10 April 2009 (UTC)
Sorry, I should have been more clear: that's exactly what I did for both 1.5.4 and 1.6.1.4 when I first ran into the problem: I grabbed the installer from SourceForge, copied over robots and roborumble/*.txt, overwrote roborumble.jar with the patched one and ran meleerumble.sh. And then I got the error again. --Deewiant 10:53, 10 April 2009 (UTC)
Thanks for the info! I think I just found the bug: in the pre-1.6.2 versions (which is where the patch comes from) there are separate methods for normal battles and melee battles. Looks like I only patched the normal one but missed melee. Expect a new set of patched versions shortly! --Darkcanuck 21:27, 10 April 2009 (UTC)
Just tested the new 1.6.1.4 patch and this problem has been fixed for melee. 1.5.4 and 1.6.0 also have been fixed. You can download the new version using the same link, although you'll probably have to clear your browser cache to get the latest version. --Darkcanuck 22:27, 10 April 2009 (UTC)
Alright, 1.5.4 works for me now, cheers. --Deewiant 10:46, 11 April 2009 (UTC)

Darkcanuck, could you please take a look at robocode 1.7.1.1, released this week? Please test it and report any bugs you find, or, in other words, decide whether or not it is stable for RoboRumble. » Nat | Talk » 06:30, 13 April 2009 (UTC)

I'm away this week but when I get back I'm planning to work on the server a bit more. Once that's done I'll take a look at the new version. And thanks to everyone who's using the patched rumble client, I think we're almost ready to disable uploads from anonymous clients! --Darkcanuck 18:32, 15 April 2009 (UTC)

There is a bug, at least in the patched 1.6.1.4 version. If you have some battle results stored and run the client with EXECUTE=NOT, you get this message and the results are thrown away.

OK. Client version null is not supported by this server! Please use one of these: 1.5.4, 1.6.0, 1.6.1.4

I guess it pulls the version number at some point after battle execution starts, or something like that. I guess the jar should be fixed, but anyway I think the server should reply FAIL instead of OK so the results are kept in the client. --zyx 08:20, 16 April 2009 (UTC)

That's a quirk of how the version number is being pulled by Roborumble -- it's a bit odd, but I just copied how it was done in 1.7.1. Not sure how easy this is to fix but you can file it as a bug on sourceforge. On the server side, I always send an "OK" to invalid clients to prevent them from holding on to possibly bad results. For example, someone may run battles with an invalid version, see the error messages and then install a valid one on top -- if the old results are still there, they would later get uploaded with the correct version number and then corrupt the rankings... --Darkcanuck 18:55, 18 April 2009 (UTC)

Rating

If in melee both Bot A and Bot B had 0 survival, Bot A and Bot B each get "0% survival" against the other. Is it 0 because 0 survival = 0% survival against the other bot, regardless of what the other bot got? Or is it because of a 0 / 0 thing? --Starrynte 00:40, 4 April 2009 (UTC)

In a melee battle, if two bots have 0 survival then when the server tries to calculate the survival % for that pairing it becomes 0 / (0+0) for bot A, same for B. The divide by zero protection simply assigns 0 scores to both, although I suppose technically they should each get 50%. --Darkcanuck 05:08, 4 April 2009 (UTC)
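The divide-by-zero guard described above can be sketched like this. This is only an illustration of the calculation as explained in the comment; the class and method names are made up and this is not the server's actual code:

```java
// Illustrative sketch of the pairing survival calculation described above.
// Names are hypothetical, not taken from the actual server implementation.
public class SurvivalShare {

    /** Bot A's survival share (%) in a pairing: a / (a + b) * 100. */
    static double survivalPercent(int aSurvival, int bSurvival) {
        int total = aSurvival + bSurvival;
        if (total == 0) {
            // Both bots had 0 survival: avoid 0/0. The server currently
            // assigns 0% to both; arguably 50% each would be fairer.
            return 0.0;
        }
        return 100.0 * aSurvival / total;
    }

    public static void main(String[] args) {
        System.out.println(survivalPercent(3, 1)); // 75.0
        System.out.println(survivalPercent(0, 0)); // 0.0 under the current rule
    }
}
```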

Team Rumble

What the heck is going on in the team rumble? Where did that melee bot come from? » Nat | Talk » 20:26, 5 April 2009 (UTC)

Ugh, thanks for pointing this out! That's exactly the sort of thing the patched clients will prevent: they also report the MELEE and TEAMS settings from the properties file. Now if only we can get everyone to adopt them (only 6 uploaders active this month, shouldn't be too hard?) ... --Darkcanuck 03:16, 6 April 2009 (UTC)

  • (off-topic) Which rumble do you most want contributors in? Right now I run roborumble and meleerumble with UPLOAD = DOWNLOAD = NOT at night (I usually shut off my internet at night, but I leave my machine running so it doesn't use SERVER, but GENERAL), and when I wake up I change UPLOAD = DOWNLOAD = YES again. Should I switch to the team rumble instead of roborumble? » Nat | Talk » 09:15, 6 April 2009 (UTC)

Melee Rumble

Is something wrong if, each time I run meleerumble, it spews out tons of HTML code? --Starrynte 18:04, 9 April 2009 (UTC)

Sounds odd... can you send me a sample? (jerome at darkcanuck dot net) I don't often run a melee client but I don't remember seeing extra output. --Darkcanuck 07:53, 10 April 2009 (UTC)

Pairing

Why does DrussGT 1.3.6 have 699 pairings when there are only 699 bots in the rumble? It should have only 698 so far (it can't be paired with itself). » Nat | Talk » 07:37, 15 April 2009 (UTC)

  • OK, it's going down to 697 now, and losing PL score. » Nat | Talk » 08:47, 15 April 2009 (UTC)
Data in the ranking tables only updates when that bot gets a new battle result. So if new bots are added or retired from the rumble, it may take a little while for all the existing competitors to fight one battle each and get updated. If you want to use data from the rankings table, I'd suggest waiting until that bot has at least 2000 battles and there have been no changes to the participants list for at least one day. --Darkcanuck 18:03, 15 April 2009 (UTC)

Comparison between robot

I like the new feature for comparison with an old version. Can you put a "total" row at the end of the comparison, and maybe add a sorting script like the main ranking page has? I hope to have a look at the server's code one of these days. --lestofante 11:54, 28 April 2009 (UTC)

Yeah, I love it! It was the one thing I really missed from the old server, and displaying recent versions with links is very nice. I second the "total row" idea, and listing "best" version among the links might be nice too, but I could live without either of those. I know I'm late to the party, but your RR server is really sweet, major thanks from me for all your hard work. --Voidious 13:50, 28 April 2009 (UTC)

Really good job, I used to save the page of my old bots and compare them in Excel. For the new features proposed, I like the sorting idea the best. Great work man. --zyx 14:57, 28 April 2009 (UTC)

Thanks! Sorting is already enabled, it works just like the other tables -- you may need to reload the page or clear your browser cache to update the javascript? Will a totals row really help more than the average % score and survival at the top of the page? --Darkcanuck 03:06, 29 April 2009 (UTC)
The sorting indeed works fine for me, nice. The % score and survival are not limited to the bots they have commonly faced, that's why I'd still find myself calculating the total from the table. (Especially before the new one has all its pairings, yes I'm that impatient. =)) It's no biggie for me to copy/paste into Excel for that (as I've been doing for however many years), but just FYI that's why it could be different. Honestly I feel guilty even mentioning more bells and whistles, but since you asked... --Voidious 03:28, 29 April 2009 (UTC)
So really what you want is an APS & avg. survival for common pairings only, correct? I could put that in the summary table at the top... --Darkcanuck 03:36, 29 April 2009 (UTC)
Yep, that would be the same for my purposes. Thanks dude! --Voidious 03:41, 29 April 2009 (UTC)
Try it out... ;) --Darkcanuck 03:50, 29 April 2009 (UTC)
Wow, you're quick! Awesome, thanks again. =) --Voidious 3:57 26 April 2009 (UTC)


Very nice! I've been missing this! Now, I don't want to sound ungrateful or anything, but I had an idea that would help comparisons even further: if there was an equivalent of an ELO graph that runs off the expected score and the diff, so it's easy to (graphically) see where you lost or gained points on a version, against strong or weak bots. I'm not sure if you would be able to just feed the graph software different data, or if you need to go in and make a copy which you could adapt to pull different data, but I'm fairly sure it's a feature which would see good use! --Skilgannon 15:49, 30 April 2009 (UTC)

Now you want a graph?!? I think you'll have to call ABC out of retirement to look into this -- my javascript skills are quite limited... ;) --Darkcanuck 16:17, 30 April 2009 (UTC)

Probable bugs

I've now gotten 5 different crashes as I've been running the melee battles over the last 24 hours on 2 different systems. 2 of them were out of memory failures, 1 battle thread exception, and 2 illegal awt something or anothers. The common thing I noticed was robot Justin.Mallais 10.0 running in each group. That robot also takes my system down to a crawl while running. Anything else I can add to help you out? --User:Miked0801

No, these are not server bugs. The out-of-memory failures mean that you set the Java heap size too low; try -Xmx512M or -Xmx1G instead of the default -Xmx256M and try again. The battle thread exception should be reported on the sourceforge tracker. The awt thing sometimes happens, but I don't think it makes the client crash. If it does crash, report it on the tracker too.

In case you don't know, sign your comments with --~~~~; it will automatically link to your user page with a nice timestamp. » Nat | Talk » 15:41, 29 April 2009 (UTC)

Hey, you might like to know (if you didn't notice) that the RR client now has the option to exclude certain bots or packages (set in the ...rumble.txt file). I haven't played with it much, but I have been tempted by SlowBots in the past =), and this sounds like a good situation for it. Not that this precludes the existence of bugs to be fixed in the RR client. But on that note, I think FlemmingLarsen may handle the RR client code, while Darkcanuck just setup a new server for it to point to. --Voidious 15:44, 29 April 2009 (UTC)
Yep, I only modified the RR client so that it sends the version to the server -- bugs should be logged at sourceforge for Fnl and Pavel to look into. The default melee memory setting is definitely way too low and really needs to be at least 512M as Nat pointed out (this has been fixed in later, unstable versions). My client runs fine with this amount, but I don't use that computer for anything else... 1.6.1.4 works fine although it has the unfortunate quirk of sending tons of output to the console, including occasional awt exceptions (which don't crash the client or seem to affect results). --Darkcanuck 16:27, 29 April 2009 (UTC)
Is there a better place on teh wiki for client bug discussions then? BTW, changed my memory settings and am testing now. --Miked0801 16:34, 29 April 2009 (UTC)
RoboRumble/Reported_Problems is the best place to start if it's not clear whether you're seeing a problem with the server, client or a specific bot. There are links to this area plus the sourceforge tracker too. --Darkcanuck 16:38, 29 April 2009 (UTC)
A quick update on the AWT thing. It happens 100% of the time when I start my Internet Explorer browser while running the game in the background. It also happened when Outlook sent me a meeting reminder. But on the server side, is there any way to make sure that unpaired robots take priority when being selected for random battles? I've run nearly 800 nano battles and have yet to get my last pairing (and have only hit 1 other robot once). I've also noticed that many pairings have yet to occur for bots with over 5000 battles complete in general melee. This might be a random number/selection bug, or it might be bad luck. Either way, this should probably be nudged to help out the ranking integrity, especially when I've battled other bots over 20 times. --Miked0801 23:34, 29 April 2009 (UTC)

I've been looking at pairings more closely recently and can tell you this much:

  • the server always reports missing pairings to the client on every upload (but only for the two bots in the pairing, limited to 50 pairs).
  • the client doesn't actually pay attention to this data until a bot reaches the BATTLESPERBOT number (usually 2000); until that point pairings seem to be chosen randomly. This is incorrect; I noted a quirk below that causes pairing completion to take longer than expected.
  • there's definitely something funny going on with melee and I'm not sure how the client puts together 10 bot matches. The server should be doing the same thing as for 1-on-1 but maybe the client doesn't use it?
  • I've seen (and others have reported) the client get stuck on one pairing, running it over and over...

--Darkcanuck 00:40, 30 April 2009 (UTC)

I just peeked at the client source and it looks like melee doesn't use "smart" battles, so it's completely random... --Darkcanuck 00:46, 30 April 2009 (UTC)

Ok, I did some further digging and managed to patch my 1.6.1.4 client to use priority battles in melee, so the missing pairings should start to sort themselves out soon. I'll make it available once I'm sure there's no bugs. But I also found that the way the client stores these pairings can lead to the same pairing being run over and over again -- especially in melee. In order to work around this problem, I've updated the server so that the missing pairings are sent to the client in a somewhat randomized fashion. This should help speed up the rate at which pairings are completed in all categories.
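The randomization described above can be sketched as follows. This is a hedged illustration only; the class, method, and constant names are invented, not taken from the actual server code:

```java
import java.util.*;

// Illustrative sketch of sending missing pairings back to the client in
// randomized order, so a client that always takes the head of the list
// doesn't re-run the same pairing over and over. Names are hypothetical.
public class PriorityBattles {
    static final int MAX_PAIRS = 50; // the server caps the list at 50 pairs

    /** Returns up to MAX_PAIRS missing pairings in shuffled order. */
    static List<String> missingPairings(List<String> missing, Random rnd) {
        List<String> shuffled = new ArrayList<>(missing);
        Collections.shuffle(shuffled, rnd); // spread battles across pairings
        return shuffled.subList(0, Math.min(MAX_PAIRS, shuffled.size()));
    }
}
```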

The Survival 0/0 = 0 bug is kinda annoying as well. Every now and then a melee battle occurs with one of the melee gods and none of the nanos survive. Seeing a 0% survival freaks me out. :) --Miked0801 23:52, 30 April 2009 (UTC)

How to Enter

How do I enter? I have a decent nano I want to try. --Awesomeness 21:55, 6 May 2009 (UTC)

  • include your bot on the participants page (RoboRumble -> Participants 1-v-1 or melee) and it will automatically get its battles on the running clients. See also RoboRumble -> Enter The Competition. Good luck! --GrubbmGait 22:19, 6 May 2009 (UTC)
Okay, I did... Do I just wait now? --Awesomeness 00:02, 7 May 2009 (UTC)
(edit conflict) Looks like you got it! Clients only refresh the participants list every 2hrs, so it may take at least that time for a new bot to show up in the rumble. 550 battles and climbing... with all the processing power running clients recently, Elite 1.0 should be at 2000 battles in no time! --Darkcanuck 00:41, 7 May 2009 (UTC)
Yep, your bot will get battles from those of us running a RoboRumble client. If you want to contribute battles yourself, check out the RoboRumble/Starting With RoboRumble instructions. There are a few things to note, though (that should be added to that page):
Once that's all set, just run roborumble.sh or roborumble.bat. Running a client is not required, but definitely appreciated if you're entering bots, and you won't have to wait as long to get a stable rating. =)
--Voidious 00:39, 7 May 2009 (UTC)

What's the problem?

Hey Darkcanuck, what's the problem? I can't find any problem with my client... » Nat | Talk » 08:53, 16 May 2009 (UTC)

Ok, ok. Just found that some roborumble results got injected into resultsmelee.txt, weird... Fixed now. » Nat | Talk » 09:18, 16 May 2009 (UTC)

And please unblock me soon, 4,613+ battles are waiting! (My clients are running with UPLOAD=NOT.) I've fixed all the issues with my result file; does anyone know how it happened? My resultsmelee.txt was injected with roborumble results and a LOT of whitespace. Actually, I much appreciate that you blocked me; what would have happened if my client had reached the whitespace (around 1000 TAB characters)? » Nat | Talk » 13:39, 16 May 2009 (UTC)

Ok, but I think you should stick to the 1v1 rumble until we can figure this out -- I'm going to keep the meleerumble block active. Are you using the iterate feature? --Darkcanuck 20:10, 16 May 2009 (UTC)

Thanks, but it would be better if you unblocked melee and kept one-on-one blocked, because I run the melee rumble as my main client (those battles are 90% melee and 10% one-on-one). Yes, I use the iterate feature (I hate the initial version check). » Nat | Talk » 02:12, 17 May 2009 (UTC)
Well that's 4000+ possibly suspect battles... I don't feel like cleaning that up if there are more bad results. If you separate your melee and 1v1 installs and delete all saved results then we can turn this back on. --Darkcanuck 06:03, 17 May 2009 (UTC)
OK, my roborumble client was moved to my hard disk without any data transfer (blank result file), and my computer crashed around half an hour ago, so I can say that no suspect battles are left (lost 9000+ battles this time). But all are clean :-) » Nat | Talk » 06:14, 17 May 2009 (UTC)

Do you have separate installs (e.g other directories) for melee and one-on-one? If not, it is strongly advised not to run melee and one-on-one at the same time. --GrubbmGait 23:02, 16 May 2009 (UTC)

No, I'm not using separate installations (not enough space on the ramdrive). :-) » Nat | Talk » 02:12, 17 May 2009 (UTC)
Now I've moved my one-on-one client back to the hard disk; the ramdrive is now used for melee only. » Nat | Talk » 02:20, 17 May 2009 (UTC)
When running from the same installation, if the same bot is running a melee AND a one-on-one battle simultaneously, you get strange stuff. Same with running a client and developing at the same time in one installation. One installation should handle one thing at a time, so use separate installs, although in your case this means a less convenient setup. --GrubbmGait 09:26, 17 May 2009 (UTC)
Now they're separated =) » Nat | Talk » 11:24, 17 May 2009 (UTC)

Well I can't really be sure, but the weird data and blanks injection sounds like a bug in the ram disk implementation to me. I haven't seen anything that could cause that in the rumble client's code nor have I heard of anyone having that issue before, but I feel that a small pointer related bug in a ram disk implementation can easily cause that behavior. --zyx 07:10, 17 May 2009 (UTC)

I don't think it is only the ramdisk's fault; I think it's both Java's and the ramdisk's fault. Anyway, I use separate installations (still synchronized) now. » Nat | Talk » 07:17, 17 May 2009 (UTC)

Have you unblocked my melee client? After cleaning the result file (accidentally, actually), my client is running again, now at iteration 15 with 2000+ battles waiting.

Confirmed:

  • My roborumble client is at C:\roborumble while meleerumble client is at R:\roborumble (ramdisk)
  • My melee result file was cleaned accidentally by a computer crash.

» Nat | Talk » 08:36, 17 May 2009 (UTC)

Iteration 34, 4500+ battles, please! (I think you are sleeping) » Nat | Talk » 11:24, 17 May 2009 (UTC)
Ok, done. :) --Darkcanuck 15:09, 17 May 2009 (UTC)
Thanks, and sorry if my client uploaded results for Diamond 1.01/1.02. » Nat | Talk » 07:14, 18 May 2009 (UTC)


Hey Darkcanuck, my apologies... It seems my client uploaded some crap to the server... I don't know why. I followed the directions above (v1.6.1.4 with the robocode patch, and changed urls), and the install was meant for the melee rumble only. If you have any suggestions I'll use them; otherwise I have no problem waiting for the foolproof version. (It would be nice to one day run the RoboRumble and enter a bot via the drop-down menu in Robocode.) Please unblock me so I can view the rankings, and I will no longer attempt to run the melee rumble unless you have suggestions. Thanks -Justin

No worries. I'll unblock your uploads shortly, but I fixed the bug that also blocked you from viewing the rankings -- that was unintended. I think your client was using the 1v1 participants url (that's what was posted above), but for melee you should be using http://robowiki.net/w/index.php?title=RoboRumble/Participants/Melee. This is in the meleerumble.txt file of course, which should be used by running meleerumble.bat/sh. Also important to have MELEEBOTS=10 in that file too. --Darkcanuck 22:32, 26 May 2009 (UTC)


Contents

  • retiring ELO column (6 replies, last modified 15:51, 17 February 2012)
  • FatalFlaw's uploads have suspicious APS for Tomcat (0 replies, last modified 04:58, 16 February 2012)
  • kidmumu uploads (3 replies, last modified 17:16, 1 February 2012)
  • Feature Request: average APS diff in bots compare (6 replies, last modified 15:55, 17 November 2011)
  • Performance (1 reply, last modified 22:48, 13 November 2011)

retiring ELO column

Now that everyone's ELO rating is subzero in General 1v1 =), is it maybe time to retire it altogether?

Voidious 22:07, 14 February 2012

I'm all for it =) Although, doesn't the LRP depend on ELO data? Maybe shift that over to Glicko data instead? And if there was some way to make the LRP show the 'expected' option by default... that would make my day =)

Skilgannon 08:55, 15 February 2012
 

I'd also support removal of ELO from the rumble, and replacing it with Glicko or Glicko2 in the places that use it (LRP).

Rednaxela 16:50, 15 February 2012
 

I also agree.

Jdev 17:53, 15 February 2012
 

Yeah, ELO doesn't do much anymore. So agreed as well.

Chase-san 19:51, 15 February 2012
 

Elo is working fine, even with negative scores, but keeping both Elo and Glicko-2 is redundant. So, removing one of them is fine by me.

MN 20:00, 15 February 2012
 

Luckily we still have the music . . .

GrubbmGait 15:51, 17 February 2012
 

FatalFlaw's uploads have suspicious APS for Tomcat

FatalFlaw's uploads have suspicious APS for Tomcat:

  • lxx.Tomcat 3.55 VS voidious.mini.Komarious 1.88
  • lxx.Tomcat 3.55 VS baal.nano.N 1.42
  • lxx.Tomcat 3.55 VS gf.Centaur.Centaur 0.6.7

Darkcanuck, can you rollback all his uploads?

Jdev 04:58, 16 February 2012

kidmumu uploads

Results from kidmumu uploads don't come close to my uploads. Is there something wrong?

MN 21:03, 29 January 2012

I haven't had a chance to check if this could affect mn.Combat, but my #1 guess would be that perhaps it's a java version issue (i.e. kidmumu is using Java 5 and Combat requires Java 6?).

Failing that, I'd have to think that kidmumu's client may be skipping turns.

Rednaxela 19:21, 31 January 2012
 

[Combat vs Corners]

[Combat vs MyFirstRobot]

[Combat vs TrackFire]

Probably a Java version issue. I'll downgrade to 1.5 in future versions. But I didn't check other bots' scores.

MN 16:56, 1 February 2012
 

I'm sure there are lots of bots that require Java 6, right? We might want to have Darkcanuck rollback all his uploads until we can get kidmumu onto Java 6.

Voidious 17:16, 1 February 2012
 

Feature Request: average APS diff in bots compare

I find that, until all pairings are done, it's very useful to know the current average difference in APS between two versions; after about 100 random battles this number indicates fairly exactly whether the newer version is better than the older one.
Darkcanuck, can you schedule adding a row for the "% Score" and "% Survival" columns in the "+/- Difference" section of the bot compare page, with the average value of the corresponding columns? I think it's 1-2 hours of work maximum.

Jdev 10:12, 17 November 2011

I think this is already covered by the 'Common % Score (APS)' and 'Common % Survival', the lowest two lines in the top-table. At least I use it to check if my changes have a positive (or negative) result when the pairings are not complete yet.

GrubbmGait 12:10, 17 November 2011
 

No, maybe I wasn't clear.
I mean that I want to know the average difference across pairings between the 2 versions. According to my tests, this number stabilizes much faster than APS. What's more, Common % Score does not make sense, because while there is only 1 battle in every pairing it's exactly equal to APS, and otherwise there may be 10 battles against Walls and only 1 battle against Druss.

Jdev 12:32, 17 November 2011
 

As far as I know, when your new version has, for example, 100 pairings, you will see the average APS for those 100 pairings, AND for your older version you will also see the APS for those same 100 pairings. And you are right, this indicates much more reliably what your final score will be (relative to your older version) than plain APS. The one who can really answer this question is Darkcanuck.

GrubbmGait 12:46, 17 November 2011
 

Wow, if things are like you say, it's really what I want, thanks :)

Jdev 12:53, 17 November 2011
 

The common %score is calculated just like APS, but only for pairings that the old and new versions have in common. That makes it easier to compare two versions when the new one is still missing many pairings, or in the case where the old bot may have pairings against a lot of retired bots (and may be missing scores vs newer bots). I think that's what you're looking for...

Darkcanuck 15:51, 17 November 2011
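The common-pairings averaging described above can be sketched like this. This is a hedged illustration of the idea only; the class, method names, and data structures are hypothetical, not the server's actual implementation:

```java
import java.util.*;

// Illustrative sketch of "Common % Score": average pairing scores over
// only the opponents that both versions have fought. Names are made up.
public class CommonAps {

    /** Opponents present in both versions' pairing maps. */
    static Set<String> commonOpponents(Map<String, Double> oldScores,
                                       Map<String, Double> newScores) {
        Set<String> common = new HashSet<>(oldScores.keySet());
        common.retainAll(newScores.keySet()); // keep only shared pairings
        return common;
    }

    /** Average percentage score over the given set of opponents. */
    static double commonAps(Map<String, Double> scores, Set<String> opponents) {
        double sum = 0.0;
        for (String opp : opponents) {
            sum += scores.get(opp);
        }
        return sum / opponents.size();
    }
}
```

Computing both versions' averages over the same opponent set is what makes the comparison fair when the new version is still missing pairings, or the old one fought bots that have since retired.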
 

Yes, thank you:)

Jdev 15:55, 17 November 2011
 

Performance

Can you turn on .htaccess browser caching for results?

#Caching
ExpiresActive on
ExpiresByType image/gif "access plus 1 year"

Other performance-enhancing things you can do: set a specific size for the images, inline or via CSS (CSS would be easier). This would speed up page loading and also be less annoying while all the requests are going through (default-sized images deform the table before they load). Minify the HTML/CSS/JS (less to send).

Not doing/doable for known reasons: serve identical files from the same url (flag images).

Chase-san 21:31, 11 October 2011

Just added the cache expiry directives -- let me know if that helps. The minification isn't necessary, my server already sends all files gzip'd, so the performance enhancement would be minimal at best.

Darkcanuck 22:48, 13 November 2011