Anyone who has had a kitten or puppy knows just how energetic and playful they can be. Although this play can be endearingly cute, that is not its purpose. In fact, for their undomesticated cousins, it is a matter of life and death. Through instinctive play our pets are honing skills crucial to stalking prey, working in a group, and establishing social hierarchies.
Cats and dogs aren’t the only small mammals that engage in play, as any parent can attest. While children playing in a sandbox aren’t learning to hunt prey, they are still developing deep intuition about the world: there are differences between wet and dry sand; you can only dig so deep and build so high; tools help us build; sharing tools with playmates can be useful. In this sense, play is the same basic process unfolding in a different developmental context.
Nor must play end with childhood; we can continue to derive experience from it, including a shift from physical intuition to systems intuition. While there’s no lack of research on the topic, one example from my own experience stands out. There’s a cute but mechanically deep colony-building game, Oxygen Not Included, by Klei. In this game, the player must manage a slew of environmental and resource issues, including temperature control and power generation. One piece of kit available to them is a ‘steam turbine’ which is, to a first approximation, quite realistic. The important thing is that by playing with it, the user gains an intuitive understanding of how the system works; they may not be able to quantify it, but they will develop a sense for when their steam isn’t hot enough or when their cooling system is insufficient. Despite this, many players do in fact dive into the numbers behind the game mechanic. I suspect a fair portion of them would run screaming from a thermodynamics classroom and procrastinate indefinitely on homework; meanwhile, they spend their free time deriving equations for the game mechanics and producing shared spreadsheets. All in the name of play.
Play can be a powerful tool for building intuition, whether informally as above, within education, or for informing policymakers and regulators. To be clear, let’s define what I mean by ‘play’ here. Play is approachable: you can ‘jump in’ without a lot of preparation. As part of that, it is low risk: you can try out ideas with few negative consequences. Your actions do, however, produce changes, and this feedback is ideally clear and directly related to what you’ve tried. Because of all of this, play is an exploration-rich environment where you might try wildly differing strategies but also spend time fine-tuning the details of a promising approach. None of this means that play is necessarily a trivial ‘toy’, nor does it have to involve ‘gamification’.
I also, perhaps controversially, do not consider play to be necessarily ‘fun’. While play certainly can be fun, the essential quality is more akin to it being engaging. The intrinsic motivation associated with play lends itself to getting lost ‘in the zone’, and this is the same whether it’s in front of a video game or experiencing objective hours of data analysis as a subjective dozen minutes.
This same pattern, where approachable systems encourage experimentation, appeared when I began thinking about my own outreach based on my research at the time. I was (and am!) very interested in tiny poison-tipped spears used by some bacteria (the Type VI Secretion System), and I’ve modelled scenarios with those spears in computer simulations where hundreds of thousands of individual bacteria are simulated. None of those topics are likely to be found in any national curriculum, but they touch on many concepts that are: forming and testing predictions, understanding figures, knowing that trade-offs are fundamental to ecological strategies. I created a ‘virtual lab’ using pre-run simulations where students can ‘play’ with parameters, particularly the energetic cost of maintaining poison-tipped spears (demo, not mobile friendly; code repo). As the students go through the lesson, they develop an intuition for how the system behaves, and the capping activity is to test a hypothesis about tipping points in ecological strategies. This experience convinced me that simplified, interactive systems can create intuition even in complex domains, which suggests a broader applicability beyond education.
Similar ‘playful sandboxes’ can be used to help policymakers and regulators develop an intuitive understanding of complex systems, to be used as a complement to data-driven decision making. While policies should not be made on gut instinct, neither should they be made blindly based on models, all of which famously lie. Unlike traditional scenario tools, which aim to answer specific predictive questions, these sandboxes are designed to show structural relationships and trade-offs in a way that is exploratory rather than prescriptive.
There are two ways, somewhat overlapping, in which intuition, derived from play, can provide context to policymakers: domain familiarity and scenario exploration.
Domain familiarity allows subject matter experts and policymakers to speak a common language. While this includes terms and jargon, it also includes the ability to understand broad concepts and data visualisations used in the domain. Policymakers working on ecological issues would, after playing, recognise terms like ‘beta diversity’ and figures like an ordination plot, and develop the instinct to ask questions about context, like whether seasonality is relevant.
Scenario exploration is about understanding interactions within the system. In this context, sandboxes would provide controls which allow users to adjust strategies and the underlying scenario. A crucial difference from data-driven models is that these systems can use simplified models with hypothetical but realistic data and parameters defining relationships, because the goal is not to predict outcomes but to let users experiment with general system dynamics. This is particularly useful for instances where the ‘real’ models are too expensive to power the sandbox. It is also useful for the many situations where a truly predictive model simply does not yet exist or is not sufficiently general. Even where such models are available, the user-facing portion of a sandbox powered by that model will generally be different when the goal is to generate broad understanding rather than specific quantitative predictions.
As an example of scenario exploration, consider a sandbox created to help a regulator understand the trade-offs between rapidly deployable and scalable but somewhat inaccurate tests, versus much more accurate but less logistically scalable tests. In the context of detecting contaminated food lots, the user can see how the lower-fidelity test kit results in a large amount of food waste due to false positives but prevents many illnesses when compared to the accurate kit, and that this is due to various interacting factors. By playing with those factors and the underlying contamination rate, the user can develop an intuitive grasp of a major concept: no analysis is necessarily always ‘best’.
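To make the shape of such a sandbox concrete, here is a minimal sketch of the model that could sit behind it. All numbers (contamination rate, sensitivity, specificity, testing capacity) are hypothetical and chosen only for illustration, not drawn from any real testing regime; the point is the structure of the trade-off, not the values.

```python
import random

def screen_lots(n_lots, contamination_rate, sensitivity, specificity,
                capacity, seed=0):
    """Simulate screening a stream of food lots with one test kit.

    Only the first `capacity` lots can be tested; the rest ship
    unscreened. Returns (wasted, missed): clean lots discarded after a
    false positive, and contaminated lots that reach consumers.
    """
    rng = random.Random(seed)
    wasted = missed = 0
    for i in range(n_lots):
        contaminated = rng.random() < contamination_rate
        if i >= capacity:
            # No kit available for this lot, so it ships untested.
            if contaminated:
                missed += 1
        elif contaminated:
            if rng.random() >= sensitivity:   # false negative
                missed += 1
        elif rng.random() >= specificity:     # false positive
            wasted += 1
    return wasted, missed

# Hypothetical kits: the cheap one covers every lot but errs often;
# the accurate one covers only a tenth of them.
cheap = screen_lots(10_000, 0.02, sensitivity=0.90, specificity=0.92,
                    capacity=10_000)
accurate = screen_lots(10_000, 0.02, sensitivity=0.99, specificity=0.99,
                       capacity=1_000)
```

Exposing sliders for these parameters, with outcomes shown only qualitatively, would let a user discover for themselves that the cheap kit wastes far more clean food while the accurate kit lets far more contamination slip through untested.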
One danger with scenario exploration is its misuse as a predictive model. The user should not use a sandbox to generate quantitative rules. For example, if the regulator noted the threshold false positive rate at which low-accuracy tests are preferred, they might codify that value into a real-world rule. This can be mitigated with clear messaging about the intended purpose, including explicit in-system text such as “Simulation output is not predictive; for training purposes only.” It also helps to design the interface so it does not imply unwarranted precision, for example by avoiding unnecessary numerical detail (including avoiding numbers altogether), using simplified or illustrative units, and presenting results informally rather than as formal analysis.
Creating such sandboxes is not a novel idea, but they do seem underutilised. I suspect this is because they require a combination of domain expertise, technical capacity to develop and deliver the tools, and a mechanism connecting them to policymaker training (including ease of delivery). While sandbox-based play has been adopted in specific areas, likely due to serendipitous confluences of the above factors, it is still far more common to hear the (important!) question of “Upon which data did you base this decision?” than “How did you form an understanding of the system so you can interpret the outputs and limitations of the data and model?” I contend that the latter question is equally important, should be asked more often, and that sandbox-based play is one route towards an answer.
Parts of this work, particularly the Type VI Secretion System model and outreach, were funded by NSF PRFB 2007151. I further developed the idea of play as a tool for policy decision support as part of my participation in EU COST Action 23152, Biofilm Regulatory Toolbox. The opinions expressed here are my own.