Queue full

Fragment of a discussion from Talk:LiteRumble

Neat! Sounds like a lot of work. What's the setup like for multi-process battles? And is it the same mechanism locally vs clients across a network?

Voidious (talk) 18:59, 31 May 2013

There is a "server" process and multiple "worker" processes. You start the server process by calling server.cmd, and each worker process by calling worker.cmd. Each one runs in a separate JVM and needs its own Robocode installation. This way each process runs in a separate window and you can see what each is doing.

All communication to LiteRumble is done by the server process alone. Server and workers communicate through RMI.
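
For illustration, here is a rough Java sketch of what a worker-facing RMI interface along these lines might look like. The names (BattleService, BattleSpec, BattleResult) are invented for the example and are not the actual client classes:

<syntaxhighlight lang="java">
import java.io.Serializable;
import java.rmi.Remote;
import java.rmi.RemoteException;

// Hypothetical remote interface the workers could use to talk to the server.
public interface BattleService extends Remote {
    // A worker asks the server for the next battle to run.
    BattleSpec requestBattle() throws RemoteException;

    // A worker sends the finished battle's result back to the server.
    void submitResult(BattleResult result) throws RemoteException;
}

// Values passed over RMI must be serializable.
class BattleSpec implements Serializable {
    String[] botNames;  // participants in this battle
    int rounds;         // number of rounds to run
}

class BattleResult implements Serializable {
    String[] botNames;
    int[] scores;       // total score per participant
}
</syntaxhighlight>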

The server process currently uses the same configuration file as the official client. Worker processes are currently 100% hardcoded, but the server address/port and Robocode home could be configured.

The server process downloads the participants list and ratings, downloads JARs (in a separate "jar" thread pool), calculates codesize, removes old participants and generates a local participants list. These tasks run in a single-threaded "download" pool (except the JAR downloads). The participants list and battle counts are sent to a "battle generation" thread pool, which is single-threaded.
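
As a sketch, those pools map naturally onto java.util.concurrent executors. The class and method names below, and the size of the "jar" pool, are assumptions for illustration, not the real client code:

<syntaxhighlight lang="java">
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Rough sketch of the server-side pools described above.
public class ServerPools {
    // Participants list, ratings, codesize and cleanup run here, one task at a time.
    final ExecutorService downloadPool = Executors.newSingleThreadExecutor();

    // JAR downloads get their own pool so they don't block the rest.
    final ExecutorService jarPool = Executors.newFixedThreadPool(4);

    // Battle generation is deliberately single-threaded, so all generation
    // logic can live in one class on one thread.
    final ExecutorService battleGenerationPool = Executors.newSingleThreadExecutor();

    void refreshParticipants() {
        downloadPool.submit(() -> {
            // Download participants list and ratings, compute codesize,
            // remove old participants, write the local participants list...
        });
    }
}
</syntaxhighlight>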

Worker processes connect to the server and request a battle, which is generated on the fly by the server in the "battle generation" thread pool. The worker then runs the battle and sends the result back to the server. Worker processes are single-threaded (except for threads internal to Robocode).
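
A minimal sketch of such a worker loop, reusing the hypothetical BattleService interface from the earlier example; the RMI URL and the runBattle() helper are placeholders:

<syntaxhighlight lang="java">
import java.rmi.Naming;

// Sketch of a worker's main loop: request a battle, run it, send the result back.
public class Worker {
    public static void main(String[] args) throws Exception {
        BattleService server =
                (BattleService) Naming.lookup("rmi://localhost:1099/BattleService");

        while (true) {
            BattleSpec spec = server.requestBattle();  // generated on the fly by the server
            BattleResult result = runBattle(spec);     // run it in the local Robocode install
            server.submitResult(result);               // send the result back
        }
    }

    private static BattleResult runBattle(BattleSpec spec) {
        // Placeholder: drive Robocode here and collect the scores.
        return new BattleResult();
    }
}
</syntaxhighlight>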

The server receives the result, splits it into codesize classes and sends them to an "upload" thread pool, which is currently single-threaded.

In the "upload" thread pool, results are uploaded to LiteRumble. Battle counts and priority battles are downloaded and sent to the "battle generation" thread pool. If workers flood the "upload" thread pool with results, the upload requests are kept in a queue and uploaded one at a time.
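
A single-threaded executor gives exactly that queueing behaviour: submitted uploads wait in the executor's internal queue and run one at a time. A sketch, with uploadToLiteRumble() as a placeholder:

<syntaxhighlight lang="java">
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the single-threaded upload pool.
public class Uploader {
    private final ExecutorService uploadPool = Executors.newSingleThreadExecutor();

    public void enqueue(BattleResult result) {
        // If workers flood us with results, the tasks simply wait in the queue.
        uploadPool.submit(() -> uploadToLiteRumble(result));
    }

    private void uploadToLiteRumble(BattleResult result) {
        // Placeholder: HTTP POST of the result, then read back battle counts
        // and priority battles and hand them to the battle generation pool.
    }
}
</syntaxhighlight>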

In the battle generator, the participants list, battle counts and priority battles are combined and used to generate a smart battle whenever a worker requests one. All battle generation logic is kept in a single class, on a single thread, which makes it easy to customize.
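
A sketch of that "single class, single thread" idea; the fields and the selection strategy below are assumptions, only meant to show why keeping everything in one place makes it easy to customize:

<syntaxhighlight lang="java">
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Queue;

// All battle generation inputs and logic in one class, called from one thread.
public class BattleGenerator {
    private final List<String> participants = new ArrayList<>();
    private final Map<String, Integer> battleCounts = new HashMap<>();
    private final Queue<String[]> priorityBattles = new ArrayDeque<>();

    // Called whenever a worker requests a battle. Assumes the fields above
    // have already been populated from the downloaded rumble data.
    BattleSpec nextBattle() {
        // Priority battles first, otherwise pick the least-battled bots.
        String[] pairing = priorityBattles.isEmpty()
                ? pickLeastBattled()
                : priorityBattles.poll();
        BattleSpec spec = new BattleSpec();
        spec.botNames = pairing;
        spec.rounds = 35;  // just an example value
        return spec;
    }

    private String[] pickLeastBattled() {
        // Placeholder strategy: sort by battle count and take the bottom two.
        participants.sort((a, b) ->
                battleCounts.getOrDefault(a, 0) - battleCounts.getOrDefault(b, 0));
        return new String[] { participants.get(0), participants.get(1) };
    }
}
</syntaxhighlight>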

The result is that you see battles running non-stop on the workers, and uploads going almost non-stop in the server process, one at a time. It makes a huge difference in melee.

MN (talk) 21:00, 31 May 2013
 

Yes, it is the same mechanism locally and across a network.

The overhead from RMI could be avoided locally, but it is so low I didn't bother.

MN (talk) 21:36, 31 May 2013