@kfuruya pointed out that I made a big unstated assumption in a previous comment, so opening up a new ticket to discuss.
Basically: how close a coupling should there be between the model and the visualization? As I see it, the two basic options are:
- The model object and the browser visualization stay in step with one another. The model runs one step, sends the visualization data to the browser, which renders it. The browser then sends a message back to the server, which runs the next step of the model, etc.
- The model and the visualization run somewhat independently. In practice, this presumably means that the model runs very quickly and generates a queue of visualization states (which might be stored in the server, or in the browser).
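To make the two options concrete, here's a minimal sketch of each. Everything here is illustrative, not an existing API: `Model`, `step()`, and `render()` are hypothetical stand-ins for whatever the real model class ends up looking like, and the browser round-trip is reduced to a function call.

```python
from collections import deque

class Model:
    """Stand-in ABM: step() advances one tick, render() returns a
    JSON-serializable visualization state. Purely illustrative."""
    def __init__(self):
        self.tick = 0
    def step(self):
        self.tick += 1
    def render(self):
        return {"tick": self.tick}

# Option 1: tight coupling. The server advances the model exactly one step,
# sends the rendered state to the browser, and waits for the browser's ack
# (over a websocket, in practice) before being called again.
def tight_step(model):
    model.step()
    return model.render()  # send to browser; next call waits on its reply

# Option 2: loose coupling. The model runs ahead as fast as it can and
# fills a queue of visualization states; the browser drains the queue at
# its own pace.
def run_ahead(model, buffer, n_steps):
    for _ in range(n_steps):
        model.step()
        buffer.append(model.render())

model = Model()
buffer = deque()
run_ahead(model, buffer, 100)   # model runs 100 steps immediately
frame = buffer.popleft()        # browser pulls frames independently
```

The queue could just as easily live in the browser (ship a batch of states down and let the client iterate); the shape of the code is the same, only where `buffer` lives changes.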
There are advantages and disadvantages to both. Off the top of my head, some of them are:
- Tight coupling can be good for experiments where the user can change parameters at runtime and see the results right away. Otherwise, you either disable that feature altogether or have some really unwieldy way to fake it.
- Loose coupling makes it easier to allow 'scrubbing' forward and backward through a model run, which I think can be immensely powerful (and as far as I know would be a new feature that doesn't exist in any widespread ABM framework).
- Tight coupling is probably easier to code, since we don't need to worry about where to cache the data, updating the cache from both ends, etc.
- Loose coupling could allow a model to 'buffer' and make the visualization smoother, important for really time-consuming models.
- Tight coupling can give an immediate sense of how fast or slowly a model is running, particularly if it's running slower than the user's target frame rate.
In a perfect world (/ final product), we would have both. Writing the advantages / disadvantages down now, I'm actually changing my mind and leaning towards looser coupling. Reasons are:
- Updating parameters on the fly can be good for experimenting, but encourages bad (read: not self-documenting) science. If you want a different behavior, you should either reset the model and run with new parameters, or explicitly add in a mid-run parameter change.
- I don't want to deal with writing a cache / queue thing, but I really want a visualization scrubber.
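For what it's worth, once the loose-coupling cache exists, the scrubber is nearly free: if every rendered state is kept in a list, scrubbing forward and backward is just indexing. A rough sketch, with all names hypothetical (`Model`, `ScrubbableRun`, `seek` are placeholders, not a proposed API):

```python
class Model:
    """Stand-in ABM: step() advances one tick, render() returns a frame."""
    def __init__(self):
        self.tick = 0
    def step(self):
        self.tick += 1
    def render(self):
        return {"tick": self.tick}

class ScrubbableRun:
    """Run the model to completion up front, caching every frame.
    Scrubbing to any point in the run is then a list index."""
    def __init__(self, model, n_steps):
        self.frames = []
        for _ in range(n_steps):
            model.step()
            self.frames.append(model.render())
        self.cursor = 0

    def seek(self, i):
        """Jump to an arbitrary frame, clamped to the recorded range."""
        self.cursor = max(0, min(i, len(self.frames) - 1))
        return self.frames[self.cursor]

run = ScrubbableRun(Model(), 100)
```

Memory is the obvious cost: caching every frame of a long run with many agents could get big, so a real version might cap the buffer or store frames server-side and page them to the browser on demand.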