Emergent Behavior and the Military

07 July 2018 by pencechp

I’ve been thinking for a while now about a parallel between group behavior in the biological context and some of the research that I’ve been doing on questions of the application and ethics of contemporary weapons technologies. We often talk about behaviors as “emergent” from groups, but it’s genuinely unclear what we mean when we do.

We might only mean something pretty mundane and uninteresting. That is, some properties of groups are simply emergent in the sense that you have to have a group of objects in order to have the property in the first place. Only groups of people have social structures, for example. It usually doesn’t make sense to talk about those social structures only in the context of individual people and their behaviors.1 Nothing there is, I think, particularly controversial, and certainly nothing is metaphysically spooky.

Some group-level properties, on the other hand, are really strange. Phenomena like the formation of Rayleigh-Bénard convection cells seem to exhibit a peculiar causal structure. It’s almost as though the properties of the convection cells cause the individual molecules of fluid to move in such a way that the convection cells are generated – a sort of circular causal chain between levels of organization. Whether or not this is ultimately a coherent way to think about these cases remains a matter of some debate in the philosophy of causation, and particularly in the philosophy of mind.2

Away from these two extremes – the uninteresting cases and the controversial cases – there lies a vast middle ground of emergent properties. These are slippery and difficult to define; philosophers of biology struggle with the notion of emergence that is at play here. (I’m currently working on a project to try to approach the definition of emergence empirically – watch this space!) But at least one relevant way that we might home in on this notion of emergence is to focus on the element of surprise, or more formally, the difficulty of predicting emergent phenomena.3 For example, in systems that exhibit chaotic behavior, it can be next to impossible to predict that a very small change in previous conditions (the butterfly flapping its wings) would lead to a much larger change in future conditions (the proverbial hurricane). It is, of course, possible in principle to predict this kind of behavior in deterministically chaotic systems – but their complexity and our inability to completely model all of the contingencies at work make it surprising that such behavior emerges from the system’s tiniest parts.
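
To make the predictability point concrete, here’s a minimal sketch – my own toy example, not anything drawn from the emergence literature specifically – using the logistic map, a textbook deterministically chaotic system. Two trajectories that begin one part in a trillion apart track each other for a while and then diverge completely:

```python
# Sensitive dependence on initial conditions in the logistic map,
# x_{n+1} = r * x_n * (1 - x_n), with r = 4.0 (the chaotic regime).

def trajectory(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.4)
b = trajectory(0.4 + 1e-12)  # the "butterfly": a one-in-a-trillion nudge

for n in range(0, 61, 10):
    print(f"step {n:2d}: |difference| = {abs(a[n] - b[n]):.2e}")
```

Every step is perfectly deterministic, yet predicting the fiftieth step in practice requires knowing the initial condition to better than twelve decimal places – exactly the in-principle/in-practice gap at issue here.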

Human beings are another classic source of emergent behaviors in large groups. Phenomena ranging from the mundane – players in a video game finding fun ways to bend the rules and produce amazing outcomes – to the far more serious, like mob psychology or murderous rampages, all seem to be emergent properties of precisely the sort we’re considering. The creators of Minecraft, for example, had no idea that the rules of their world could be used to produce a universal Turing machine, just as fans at a concert or sporting event have no idea what small incident might precipitate a dangerous stampede. Again, examples like these can be explained in terms of individual-actor behavior after the fact. But they remain hard to predict based upon the behavior of garden-variety rational individuals.

As with all human behaviors, however, we can build group structures that control for such emergent properties. Arguably one of the finest structures for controlling and harnessing human emergent properties is the contemporary military. Socialization and training are tailored – through thousands of years of social and cultural evolution – to suppress some of those emergent behaviors (the usual properties of human mobs) and to cultivate others (particularly those to do with bonding, camaraderie, and leadership structures), in careful proportion, with the goal of obtaining a very particular end.

Let’s turn to contemporary military technology. I’m one (particularly undistinguished) philosopher among the many who have spent time worrying about the ethics of robotic, AI, or cyber-weapons. One of the key worries here is about emergent behaviors, especially in contexts where multiple high-tech systems are deployed together. What happens when there are thirty AI units working together in a pack, connected to one another rather than to a centralized controller? Will small quirks in their programming (the metaphorical butterfly) produce unexpected and large emergent behaviors? Probably. We have already seen very similar behavior in “flash crashes” caused by electronic trading algorithms. We’ve also seen this problem in the wild for cyberweapons – Stuxnet was only discovered, for example, because one portion of it spread much farther than it was intended to. (One can imagine what this could look like if extended to multiple cyberweapons, each set to attack automatically when provoked.)
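
As a toy illustration of the pack worry – emphatically a sketch with made-up parameters, not a model of any real weapons system – here is a bare-bones Vicsek-style swarm. Each agent follows one purely local rule (match the average heading of its neighbors, plus a little noise); no line of code mentions the group, yet a globally aligned flock emerges:

```python
# A bare-bones Vicsek-style swarm on a periodic square. Each agent
# steers toward the average heading of neighbors within `radius`,
# plus uniform noise. Global alignment is measured by an order
# parameter: 1.0 = everyone moving the same way, ~0 = incoherent.
import math, random

random.seed(1)
N, box, radius, speed, noise = 60, 10.0, 1.5, 0.1, 0.3
pos = [(random.uniform(0, box), random.uniform(0, box)) for _ in range(N)]
ang = [random.uniform(-math.pi, math.pi) for _ in range(N)]

def order_parameter(angles):
    sx = sum(math.cos(a) for a in angles) / len(angles)
    sy = sum(math.sin(a) for a in angles) / len(angles)
    return math.hypot(sx, sy)

for step in range(201):
    new_ang = []
    for i in range(N):
        # average heading of all agents within `radius` (self included)
        sx = sy = 0.0
        for j in range(N):
            dx = pos[i][0] - pos[j][0]
            dy = pos[i][1] - pos[j][1]
            if dx * dx + dy * dy <= radius * radius:
                sx += math.cos(ang[j])
                sy += math.sin(ang[j])
        new_ang.append(math.atan2(sy, sx) + random.uniform(-noise, noise))
    ang = new_ang
    pos = [((x + speed * math.cos(a)) % box, (y + speed * math.sin(a)) % box)
           for (x, y), a in zip(pos, ang)]
    if step % 50 == 0:
        print(f"step {step:3d}: alignment = {order_parameter(ang):.2f}")
```

The flip side is exactly the worry above: nudge `noise` or `radius` slightly and the collective behavior can change qualitatively, in ways that are very hard to read off the per-agent code.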

One might, of course, argue that there is nothing new under the sun. There are emergent properties in the case of human groups just as there are emergent properties for robotic groups. But this, I think, misses one crucial point. We’ve been building a system for a very long time, with extremely intensive testing, to contain the effects of emergent properties in the human case. We’re quite good at it, in fact. But it strikes me that there is no reason whatsoever to think that the same kinds of mitigation strategies we have already used for humans will continue to be effective when applied to artificially intelligent systems.

And responding to this situation is, almost by definition, incredibly difficult – one of the defining characteristics of emergent behaviors being the difficulty of predicting them in advance. The traits expressed by individual AI agents (which is, notably, how most isolated testing takes place) may or may not even be interesting to a researcher until undesirable emergent behaviors have already appeared.

We can run simulations in the hope of teasing out some of these results, though this, too, will be rather challenging. Emergent behaviors appearing in simulations might at first be taken to indicate bugs in one’s simulation, given that those behaviors often involve “corner cases” in the explicit programming of the agents themselves. If nothing else, the issue reinforces the need for extensive monitoring of AI systems, whether that amounts to active “in-the-loop” control or something less intensive.
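
One schematic way to picture the “something less intensive” option – a sketch under my own assumptions, with hypothetical names and thresholds, not any established protocol – is an “on-the-loop” monitor that watches a group-level statistic and escalates to a human only when it leaves the envelope observed during testing:

```python
# A schematic "on-the-loop" monitor: instead of approving every action,
# it watches a group-level statistic (e.g., the swarm alignment above)
# and escalates to a human when the value leaves the band observed in
# testing. All names, values, and thresholds here are hypothetical.

def monitor(metric_stream, lower=0.2, upper=0.9):
    """Yield (step, value, escalate) for each observed metric value."""
    for step, value in enumerate(metric_stream):
        yield step, value, not (lower <= value <= upper)

observed = [0.55, 0.61, 0.58, 0.93, 0.97]  # made-up metric values
for step, value, escalate in monitor(observed):
    status = "ESCALATE TO HUMAN OPERATOR" if escalate else "ok"
    print(f"step {step}: metric={value:.2f} -> {status}")
```

The hard part, of course, is choosing a statistic and a band that would actually catch the behaviors we failed to predict – which is the same problem in another guise.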

Swarming and networking seem to be particularly good methods for gaining real advantage from twenty-first-century weaponry – we thus need to be careful to ensure that such uses don’t introduce a host of undesired consequences.

  1. I don’t mean by that to say that these properties of social structure might not be, at the end of the day, reducible to only facts about individual people. That kind of question about reductionism remains hotly debated across philosophy. But it would, to continue with the example, be weird to talk about the properties of individuals that grounded social structures if you didn’t know or care that the individuals formed social groups – it seems unlikely that those collections of individual properties would be interesting on their own. 

  2. There’s a great blog post about downward causation – with evidence and links to both supporters and detractors – that you can find over on The Brains Blog, by Eric Thomson. Also, one of the Bechtel-Craver examples draws from A Confederacy of Dunces, so it’s got that going for it.

  3. This idea goes all the way back to some of the earliest invocations of emergence. See, e.g., C. D. Broad’s (1925) The Mind and Its Place in Nature, London: Routledge & Kegan Paul.