There isn’t much that I can add with regard to 2020 that hasn’t been said to death already across social media and the various airwaves. It’s been challenging to say the least, to use a non-profane word. Its impact on Brick Mill Games was not fatal, but there were quite a few challenges that both Paul and I had to overcome.

We were, and still are, a “zero-revenue” company. The browser-based TOCS web client is still scattered in bits on the drawing board and in our minds. In fact, in spite of both of our desires to get it up and running in at least some rudimentary form in order to play test both the system and our developed game modules, there has been zero progress on it since March.

The March timing is no coincidence. With the outbreak of Covid-19, our play testing venues, the primary one being the Nor’Easter ASL convention, were cancelled… and with those cancellations went the “stakes in the sand” we had been using to drive client development. Our goals back in January and February 2020 were geared toward building a web client good enough to play and/or debug in a 24/7 environment, away from our day job responsibilities. Like Guderian’s panzers in ’41, we were charging forward regardless of our flanks in order to achieve the goal.

With the removal of the game convention deadline(s), our need for the new web client fell off dramatically. We still play tested the TOCS system, but we did so remotely using the flawed Vassal module, which proved, once again, how far the Vassal system falls below our requirements for TOCS.

While the play test proved lackluster with regard to game results, it did show us that we had other fish to fry, so to speak. Those flanks we had figuratively ignored came back to bite us. We discovered errors in our orders of battle and in our data processes, errors exacerbated by Apple’s macOS Catalina upgrade in June. By the end of June, we were paying for all of our quick development sins: spreadsheet errors, OoB errors, software library errors, 3rd-party software errors, etc. In quite a few cases, Paul’s system was deliberately not up-to-date with mine (to avoid the Catalina issues) and he was unable to create graphics that I could create on my system. We had moved so far, so fast in the prior years, each working in our own sandbox, that we were also facing interoperability issues. He couldn’t build what I could, and vice versa.

Securing the Flanks

So while the web client work has sat idle these many months, the rest of the project has seen much needed improvements in the areas of data integrity and process standardization.

Scripts and Libraries

The mish-mash of assorted Ruby code which resembled an API has been refactored using the ‘gem’ standard. The Canvas collection of Ruby classes and modules has been transformed into a RubyGem called PixMill. While Canvas had relied on up to seven(!) 3rd-party Ruby gems to do its work, each simply a Ruby->C interface, PixMill now relies directly on the underlying C libraries that the 3rd-party gems interfaced with; it was the failure of one of these 3rd-party gems which had prompted the upgrade work.
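PixMill’s internals aren’t shown here, so purely as an illustrative sketch of the direct-to-C approach (not actual PixMill code), Ruby’s stdlib Fiddle can call into a C library with no wrapper gem in between. The libm example below stands in for the imaging libraries PixMill actually binds.

```ruby
require 'fiddle'

# Illustrative only: bind directly to a C function already loaded into the
# Ruby process (libm's cos), the way a gem can wrap a C library without
# an intermediate Ruby->C wrapper gem.
handle = Fiddle::Handle::DEFAULT          # search the whole process for symbols
cos_fn = Fiddle::Function.new(
  handle['cos'],                          # address of the C symbol
  [Fiddle::TYPE_DOUBLE],                  # argument types
  Fiddle::TYPE_DOUBLE                     # return type
)

puts cos_fn.call(0.0)   # 1.0
```

Cutting out the wrapper gems means one fewer moving part to break when the OS or Ruby version changes, which is exactly what bit us in June.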

Once PixMill was done, all libraries and scripts which relied on the Canvas API had to be refactored to use the PixMill API. While the two APIs are mostly similar, there were enough changes that this was not a zero-time effort. In fact, the gem-ification of Wargame-Graphics required a full 6000+ line refactoring effort across multiple files over the course of two weeks.


Once I was able to get PixMill up and running, Paul and I were able to meet face-to-face and get our systems in sync. After a full day of delayed OS and custom code upgrades, plus some remote screen sharing time on a later day, Paul was able to build TOCS game modules from spreadsheet to counters on his system in the same way I was able to.

Source Code Organization

TOCS used to sit within 3-4 separate source code repositories, split between the Core rules and the individual Modules. In the early stages of development, Paul and I could frequently work on different areas of the project without impacting each other’s work. As the project matured, however, each new module seemed to require a change in the core repository, and keeping multiple branches open across multiple repositories was proving to be an administrative headache, especially for a two-man project.

So, we merged all of the TOCS-related code into one encompassing code repository. This took about a week of work, but it has improved our efficiency quite a bit.


Orders of Battle

While our data-driven system has proven quite useful, we found a fundamental issue in our Orders of Battle spreadsheets: we weren’t properly tracking units’ formations, especially when a unit belonged to one formation but was assigned to another. Our spreadsheet made numerous assumptions as to which units belonged to which formations, and those assumptions did not hold for some of our game modules.

Paul and I spent quite a few weeks discussing the formatting and nomenclature of both the spreadsheets and the game module and scenario files generated from them. To support the new methodology, the spreadsheets of all three game modules had to be proofread and have new formation data added, an effort that alone touched 2500+ rows of spreadsheet data. Additionally, processing code had to be created and rewritten to account for the new data. Finally, 2000+ lines of hand-generated XML needed to be proofread and modified to align with the new Orders of Battle spreadsheets. (Paul deserves a medal for this work, as it spanned weeks’ worth of multiple sprint efforts.)
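Our actual spreadsheet layout isn’t reproduced in this post, so here is a purely hypothetical sketch of the organic-versus-attached distinction using Ruby’s stdlib CSV. The column names and unit rows are invented for illustration; they are not our real format.

```ruby
require 'csv'

# Hypothetical OoB rows: each unit records its organic (parent) formation
# and, optionally, the formation it is currently attached to. Columns and
# units are invented for illustration.
rows = CSV.parse(<<~CSV, headers: true)
  unit,organic_formation,attached_to
  1/505 PIR,82nd Airborne,
  70th Tank Bn,Separate,4th Infantry
  3/22 IR,4th Infantry,
CSV

# A unit plays under its attachment when one exists, otherwise under its
# organic formation, so the processing code never has to guess.
def playing_formation(row)
  attached = row['attached_to']
  attached.nil? || attached.empty? ? row['organic_formation'] : attached
end

rows.each { |r| puts "#{r['unit']} -> #{playing_formation(r)}" }
```

Recording both columns explicitly is what removed the assumptions that were baked into the old spreadsheets.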

QA & Review

The more mature this project was getting, the more data was being generated. With two people now able to recreate and generate everything from soup to nuts, we also needed to improve our quality assurance and reviewing tools. These tools currently fall into three distinct areas:

  • image (counter) quality
  • module data quality
  • map quality

To track counter changes and/or image quality, we use a tool to create a checksum of the generated image data for each counter. With this tool in hand, we can detect when there has been a change and whether it is a desirable one.
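The checksum tool itself isn’t reproduced here; as a minimal sketch of the idea using only Ruby’s stdlib, hash each generated image and diff against a previously saved manifest. The file paths and JSON manifest format below are my assumptions, not the tool’s actual layout.

```ruby
require 'digest'
require 'json'

# Minimal sketch (not the actual tool): checksum every generated counter
# image so that a later run can detect which images changed.
def checksum_manifest(paths)
  paths.sort.to_h { |p| [p, Digest::SHA256.file(p).hexdigest] }
end

# Return the paths whose checksum differs from (or is absent in) the old run.
def changed_images(old_manifest, new_manifest)
  new_manifest.select { |path, sum| old_manifest[path] != sum }.keys
end

# Usage sketch (paths are hypothetical):
#   old = JSON.parse(File.read('counters.manifest.json'))
#   new = checksum_manifest(Dir['build/counters/*.png'])
#   changed_images(old, new).each { |p| puts "changed: #{p}" }
```

A flagged image may be a deliberate redesign or an accidental regression; the point is that nothing changes silently.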

To track module data quality (aka content), an HTML review page is now generated which shows not only the generated counters but also the formations they belong to, as well as the game box data that carries scenario information such as setup and reinforcements.
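The review page generator isn’t shown in this post; as a sketch of the approach using stdlib ERB, module data (an invented structure here, much simpler than the real thing) becomes a simple HTML review page.

```ruby
require 'erb'

# Invented data shape for illustration; the real module data is far richer.
formations = [
  { name: '4th Infantry Division',
    counters: ['8th IR', '12th IR', '22nd IR'] }
]

# ERB template turning the data into a human-reviewable HTML fragment.
template = ERB.new(<<~HTML)
  <h1>Module Review</h1>
  <% formations.each do |f| %>
    <h2><%= f[:name] %></h2>
    <ul>
      <% f[:counters].each do |c| %><li><%= c %></li><% end %>
    </ul>
  <% end %>
HTML

review_html = template.result(binding)
```

Because the page is generated from the same data that builds the module, reviewing it reviews the data itself, not a hand-maintained copy.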

To track map quality (aka metadata), an HTML file is generated that can graphically display a rendition of the map based solely on its metadata. Comparing this with the artistic map allows the module designer to ensure that the game engine sees what the player sees with regard to the map(s). Right now, this is necessary as the metadata is hand-generated by eyeball. In the future, the metadata generation and map creation will be done by a single tool.

XML Document Schemas

A computer-assisted wargame system requires another dimension of data and data integrity checking. It is one thing to write PDF documents for a game’s rules and play books. It is another to provide rules and play book (i.e., scenario) information in a form that is both human-readable as documents (HTML in our case) and computer-readable (XML).

While we’re not quite at the stage where a game client can “understand” rules as we know them, at least in an automated way, Paul and I are using XML files to describe rules, play books, mission briefs, charts and client-readable scenario information.

As I’ve written in months past, keeping our rules (and charts, etc.) in XML allows us to generate HTML documents (and documents in other formats) and player document packs, and it lets us more easily track document changes.

Another use of XML is in our game module’s game box files. If you’re a Vassal user, you may be aware of the game box concept. In essence, a module’s game box describes all of the resources (items) that a TOCS game module will need to allow you, via the game client, to play the game. Orders of battle, reinforcement schedules, weather, the types of dice to be used, whatever… it’s all in there.
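Our game box schema isn’t shown in this post, so the fragment below is an invented illustration of the idea only; the element and attribute names are not our actual schema. Reading it with stdlib REXML shows how a client could discover a module’s resources.

```ruby
require 'rexml/document'

# Invented game box fragment: element and attribute names are illustrative
# only, not the real TOCS schema.
gamebox_xml = <<~XML
  <gamebox module="Utah Beach">
    <resource type="oob"            href="oob.xml"/>
    <resource type="reinforcements" href="reinforcements.xml"/>
    <resource type="weather"        href="weather.xml"/>
  </gamebox>
XML

doc = REXML::Document.new(gamebox_xml)

# Collect [type, href] pairs for every resource the module declares.
resources = doc.root.elements.to_a('resource')
               .map { |e| [e.attributes['type'], e.attributes['href']] }

resources.each { |type, href| puts "#{type}: #{href}" }
```

The client never needs hard-coded knowledge of a module; everything it must load is enumerated in the game box.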

In the future, the game box will be part of a module creation tool set, but that tool set is a long way off. In the meantime, we needed to ensure that our hand-edited XML documents are checked for correctness, both syntactically and schematically.

Enter Relax NG. While the XML ecosystem also supports XSD (XML Schema Definition), we find Relax NG schemas more straightforward to write. Paul and I had stored our combined knowledge of what we felt should be in the various XML documents in a private wiki system, but we needed an automated way to check the documents themselves, especially as these files were beginning to run to thousands of lines of tagged document code.

I had the easy part; I wrote a 57-line script to interface with a pre-existing RubyGem called nokogiri that supports RelaxNG-style schema checking. Paul, on the other hand, had to convert our wiki notes into an XML schema document with which to check our documentation and game box XML documents. (Again, Paul deserves a medal for this work.)

Utah Beach

Our most recent work was the continuation of development of our largest game module to date: Utah Beach. With over 1300 counters and multiple scenarios covering nearly a month of action on the Cotentin Peninsula, Paul and I were able to test the process improvements we had made by working on a large module that had previously seen only skeletal development. Considering that this work involved new types of units as well as a German army that was virtually composed of spare parts, we were able to get the module to a workable state in record time compared to our previous efforts.

Tying it all Together

As the old saying goes (in its original form), the proof of the pudding is in the eating. As of this writing, we can delete all of our generated files and then, for all three modules: rebuild counters and PDF countersheets, run image integrity checks, validate game box XML files, validate documentation files, generate HTML documentation packages (“docpacks”) for the Allied and German players, and generate HTML review pages, all in three minutes. My laptop’s fan is a whirling dervish during this process, but it works.
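Our actual driver script isn’t shown in this post; purely as a hypothetical sketch of how such a soup-to-nuts rebuild can be wired together in Ruby, here is a Rakefile-style task graph. The task names are invented, and each real task would invoke the corresponding generation code.

```ruby
require 'rake'
include Rake::DSL

# Hypothetical task names; each stands in for a real pipeline step.
# `rebuild` depends on every step, so one invocation redoes everything.
STEPS = %i[counters countersheets image_checks gamebox_validation
           doc_validation docpacks review_pages].freeze

RAN = []  # records execution order, purely for illustration

STEPS.each do |step|
  task(step) { RAN << step }   # a real task would do the actual work here
end

task rebuild: STEPS
Rake::Task[:rebuild].invoke
```

Expressing the pipeline as a dependency graph is what makes the delete-everything-and-rebuild test cheap enough to run routinely.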


While 2020 was not the year we had envisioned, Paul and I were able to make significant process and quality improvements, as well as bring our largest game module to date to a testable state. As 2020 draws to a close, we are both beginning work on the web client and its server-side support code. With up to five languages in play (Ruby, C, JavaScript, XML/HTML, SQL), this effort is every bit as challenging as we thought it would be. However, our morale has never been higher, and we’re looking forward to more progress in 2021.