An illustration of a swarm of small drones, forming a "murmuration" resembling the shape of a bird.
Illustration for Ingenia by Benjamin Leon

Engineering swarm robotics

As an engineer and nature-lover, Dr Anna Ploszajski has always been fascinated by biomimicry: taking inspiration from nature to improve our human-made world. Here, she explores the engineering that goes into copying one of nature’s finest spectacles, and some of the surprising applications possible when robots work together.

Did you know?

🐦 Dive into swarm behaviour!

  • Swarms in nature (and robotic swarms, too) don’t have any kind of central coordination, nor do they rely on members of the swarm playing a different role from the others (such as being a leader)
  • Robots being trained to become members of swarms are increasingly taught using deep learning, a type of machine learning
  • Robot swarms are being proposed for search and rescue after natural disasters, by militaries, and even to battle wildfires

Every winter evening, hundreds of thousands of starlings swarm from across Sussex to roost on Brighton’s piers, forming dark, liquid shapes across the sky. This sight – also visible in many other places in the UK over winter – is a grounding reminder of the natural world’s beauty and power.

Biomimicking with materials

🍃 Some of the best ways we imitate nature

Some of Anna’s favourite examples of biomimicry are:

  1. Superhydrophobic coatings that emulate the waxy, hairy surface of a lotus leaf to make extremely waterproof and self-cleaning surfaces.
  2. Wind turbine blades made more aerodynamic by copying the bumpy leading edges (called tubercles) of humpback whale flippers.
  3. Solar panels with double the efficiency thanks to engineers copying the geometric surfaces on a butterfly’s wings.

Murmurating starlings, as well as swarming insects and schools of fish, are the inspiration for a branch of engineering called swarm robotics. This is the approach of using large numbers of robots that coordinate with each other to create a moving mass whose behaviour is more than the sum of its parts.

Examples of emergence

❄️ Things in nature that are greater than the sum of their parts...

Other examples of emergence are:

  1. Water molecules bonding together in formations that self-assemble under the right conditions to give rise to complex and symmetrical patterns of snowflakes.
  2. Neurons in the brain that fire electrical signals individually, but together give rise to consciousness.
  3. Life itself, an emergent property of carbohydrates, lipids, proteins, and nucleic acid molecules forming the simple structural building blocks of living things.

How swarms turn simple behaviours into complex ones

Swarming is what’s known as an ‘emergent’ behaviour in mathematics. It’s an example of a self-organising system, where simple rules followed by a group of individuals give rise to a larger mass that behaves as one. Importantly, it does not involve any central coordination or instruction. This also means the swarm doesn’t rely on any member playing a different role from the others, such as being a leader (although you can have a replaceable leader, as with a V formation of flying geese). Swarms can therefore cope with the removal or addition of individual members without it majorly impacting the whole.

An eight-panel storyboard showing, step by step, a swarm of ‘boids’ (simulated birds) navigating four consecutive columns acting as obstacles.

Screenshots from Craig Reynolds’ work simulating swarms, in about 1987. This storyboard shows the different stages of swarms moving around cylindrical obstacles © Craig Reynolds

Computer models of swarms were first developed in the 1980s by Craig Reynolds, a computer programmer from Chicago. While living in Los Angeles, Craig wrote a computer program featuring swarming ‘boids’, which simulate a flock of birds.

“As a child, I was fascinated by what I would now call natural complexity: the turbulent shape of clouds, plant shapes, ant colonies, bird flocks,” says Craig, of his early inspiration. “In college [in the 1970s], I studied programming and graphics, and started thinking about such things through the lens of software simulation models.” 

Craig was inspired by Braitenberg vehicles. These very simple machines were first proposed by (and named after) the neuroscientist Valentino Braitenberg in the early 1980s. They move in response to signals (usually a light source) detected by onboard sensors.
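The wiring idea behind these vehicles can be sketched in a few lines of code. The sketch below is purely illustrative (not Braitenberg’s own formulation), assuming a vehicle with two light sensors at the front, each feeding one of two wheel motors:

```python
# A sketch of a Braitenberg-style vehicle: two light sensors drive two
# wheel motors directly. Crossed wiring steers the vehicle toward the
# light; uncrossed wiring steers it away. Illustrative numbers only.

def wheel_speeds(left_sensor, right_sensor, crossed=True, gain=1.0):
    """Return (left_wheel, right_wheel) speeds from two light readings."""
    if crossed:
        # The right sensor drives the left wheel and vice versa, so the
        # vehicle turns toward the light (often read as 'aggression')
        return gain * right_sensor, gain * left_sensor
    # Each sensor drives the wheel on its own side, so the vehicle
    # turns away from the light (often read as 'fear')
    return gain * left_sensor, gain * right_sensor
```

With a bright light to the vehicle’s left, `wheel_speeds(0.9, 0.1, crossed=True)` gives a slow left wheel and a fast right wheel, turning the vehicle toward the light; the same readings with `crossed=False` turn it away.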

How we anthropomorphise machines

🧠 The psychology of Braitenberg vehicles

Part of Braitenberg’s theory was how the vehicles exemplify the human tendency to anthropomorphise non-sentient machines. This is because humans observing robots being repeatedly drawn to or avoiding a stimulus can easily impart behavioural interpretations such as ‘aggressive’ or ‘cowardly’ onto them. 

“Watching a flock from the outside seemed too complicated. But when I imagined it from a bird’s perspective it seemed much less so. It seemed clear that as a bird, I would want to make small incremental adjustments in my speed and heading,” explains Craig. “If I got too close to a nearby flockmate, I’d want to gently steer away from them. To not get too close in the future, I’d want to steer in roughly the same direction as nearby flockmates. To stay with my flock, if I were on the outside edge, I would steer gently in, toward my nearby flockmates.” As it turned out, complexity could emerge from these surprisingly simple rules.

“I carried that idea in my head for a while. I thought those three rules (what I now call separation, alignment and cohesion) were necessary but wasn’t sure if they were sufficient. In 1986, I finally tried to implement it. Fortunately for me, those three rules produce flocking.”
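Those three rules can be written down surprisingly compactly. The following is a minimal 2D sketch of separation, alignment and cohesion (illustrative only, not Craig’s original implementation; the neighbourhood radius and rule weights are arbitrary choices):

```python
import math

# A minimal 2D sketch of the three boid rules -- separation, alignment,
# cohesion. Purely illustrative; not Craig Reynolds' original code.
# Each boid is a list [x, y, vx, vy].

def step(boids, radius=5.0, sep_w=0.05, ali_w=0.05, coh_w=0.005, max_speed=2.0):
    updated = []
    for b in boids:
        x, y, vx, vy = b
        near = [n for n in boids
                if n is not b and math.hypot(n[0] - x, n[1] - y) < radius]
        if near:
            k = len(near)
            # Separation: steer away from nearby flockmates
            vx += sep_w * sum(x - n[0] for n in near) / k
            vy += sep_w * sum(y - n[1] for n in near) / k
            # Alignment: match the average heading of nearby flockmates
            vx += ali_w * (sum(n[2] for n in near) / k - vx)
            vy += ali_w * (sum(n[3] for n in near) / k - vy)
            # Cohesion: steer gently toward the neighbours' centre of mass
            vx += coh_w * (sum(n[0] for n in near) / k - x)
            vy += coh_w * (sum(n[1] for n in near) / k - y)
            # Cap speed so adjustments stay small and incremental
            speed = math.hypot(vx, vy)
            if speed > max_speed:
                vx, vy = vx / speed * max_speed, vy / speed * max_speed
        updated.append([x + vx, y + vy, vx, vy])
    return updated
```

Each boid looks only at its neighbours within a small radius, exactly as Craig describes: there is no leader and no global plan, yet calling `step` repeatedly on a scattered flock produces coordinated motion.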

Early motion tests of Craig's "boids", recorded on VHS tape in around 1986

“The unpredictable, improvisational nature of flock motion was a pleasant surprise to me,” says Craig. “It made the simplistic simulations feel much more ‘alive’ than I expected.” But while the idea sounds relatively straightforward, implementing it had its challenges.

“A problem I did not anticipate was the difficulty of tuning the model’s parameters. It had about 10 parameters, control knobs, to adjust,” he explains. “They all interacted (nonlinearly), so adjusting one always meant adjusting others, which meant adjusting others. It took many, many tests to converge on the desired type of motion.”

While this means essentially any type of motion was possible, achieving it by hand was deeply complex. So, researchers have been automating the tuning process with optimisation techniques, including machine-learning approaches.

Simulating swarms in movies and gaming 

Craig has since spent a large part of his career refining these models, including adding complications such as obstacles for the boids to avoid, or targets for them to reach. His career has led him to work in all sorts of applied areas, from graphics for gaming and the films Tron and Batman Returns, to autonomous vehicles. One example is a school of fish designed for the 1987 short film, Stanley and Stella: Breaking the Ice.

A digital rendering of several shoals of different coloured fish swimming against a blue background.

Craig Reynolds developed simulations for the PlayStation 3. The PSCrowd Chameleon Fish demo simulated 10,000 schooling fish at 60 frames per second

“In feature animation, use of these techniques generally has to do with background action: crowds in cities, herds of animals out in the wild, and battle scenes,” says Craig. “In games, the difference is that everything must operate in real-time (easy [today], quite challenging a decade or two ago) and the members of the crowd (nonplayer characters) usually need to react to the player’s character.”

Craig also developed virtual simulations of cars, technology that would lay the groundwork for that used in autonomous or driverless vehicles. “I was creating the dynamic elements in urban and highway environments, to which the simulated car under test needed to react. This was primarily vehicle traffic on the roads, and to a lesser extent, pedestrians on sidewalks and crosswalks,” he says. 

While there was a degree of crossover with his previous projects, these simulations had to go further still. “The crowds were quite similar to game worlds. The vehicle agents had [very] different ‘locomotion’ styles due to their mechanics: turning radius, stopping distance, and the like, in addition to being much larger and more dangerous.” 

This illustrates one key challenge in the field of swarm robotics: how to bridge the gap between simulation in the virtual world and real robots in the physical world.

Swarm intelligence

Moving swarms from the digital world into a physical environment introduces lots of extra challenges, quickly pushing the limits of these so-called ‘classical’ algorithms. This is especially true if there are many robots involved. 

So, today’s models for swarm robotics are being informed by artificial intelligence (AI) – a field called swarm intelligence. It allows robots to interact locally with one another (as in the early models), but also lets members of the swarm quickly learn about the surrounding environment and adapt to changes in it.

In 2020, robotics engineers at Northwestern University (US) created self-organising robotic swarms. In simulation, the swarms comprised over 1,000 drones, while in the lab, the team were able to control a swarm of 100 real robots © Northwestern University

According to Professor Amanda Prorok, a roboticist at the University of Cambridge, researchers typically pre-train a robot in simulation, and then fine-tune the behaviour of the actual robot in a “safe kind of sandbox-type situation”, such as in the lab. With this approach and the help of clever algorithms, robotics labs around the world are making solid progress towards addressing the many technical challenges that swarm robotics faces.

That’s all well and good for individual robots, but how does it map onto swarm control? As Amanda explains, a decentralised model is most like the swarms seen in nature. In such a scheme, there’s no overarching, ‘mastermind’-style control system: each robot governs its own behaviour, with input only from nearby robots. This is how it works in the Prorok Lab’s research, too.

Two robotics engineers are viewed from above as they set up an experiment on miniature autonomous robotic cars.

In one of the Prorok Lab’s projects at the University of Cambridge, the team used AI-trained miniature car robots as a model for autonomous vehicles © Prorok Lab

Professor Amanda Prorok at the University of Cambridge uses neural networks to train robotic swarms. She explains the key training paradigms: imitation learning and reinforcement learning. “In imitation learning, you show the robot what it’s supposed to do in given scenarios … and the robot learns to copy it. And in reinforcement learning, you reward the ideal behaviour when the robot gets something right. For example, if you want a robot to find an exit of a maze, you can tell that robot, ‘Hey, when you find that exit, I’m going to reward you.’ Over time, it learns the right behaviour.”
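The maze-exit reward Amanda describes can be illustrated with a toy example. The sketch below is not the Prorok Lab’s code and all of its numbers are arbitrary: a tabular Q-learning agent learns to walk along a short corridor, receiving a reward only when it reaches the exit at the far end.

```python
import random

# Toy illustration of learning from rewards: tabular Q-learning on a
# short 1D corridor whose exit is at the far right. Arbitrary numbers,
# purely illustrative.

def train(length=5, episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2):
    # Q[state][action]: action 0 = step left, action 1 = step right
    Q = [[0.0, 0.0] for _ in range(length)]
    for _ in range(episodes):
        state = 0
        while state != length - 1:          # each episode ends at the exit
            if random.random() < epsilon:   # explore occasionally
                action = random.randint(0, 1)
            else:                           # otherwise take the best guess so far
                action = 0 if Q[state][0] > Q[state][1] else 1
            nxt = max(0, state - 1) if action == 0 else min(length - 1, state + 1)
            reward = 1.0 if nxt == length - 1 else 0.0  # reward only at the exit
            # Standard Q-learning update rule
            Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
            state = nxt
    return Q

# After training, the learned policy can be read off the Q-table
policy = ["right" if q[1] > q[0] else "left" for q in train()[:-1]]
```

Early on, the agent wanders; over many episodes, the reward at the exit propagates back through the Q-table until “step right” becomes the preferred action in every state – the “over time, it learns the right behaviour” that Amanda describes.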

This ‘training’ would take too long in the physical world, so the robots are often ‘trained’ in a simulated, computer world, and then this learning is transferred to the physical robots. But this leads to another problem.

“As we go about deploying our robots in the real world, we encounter something called the simulation-to-reality gap. Because the world that the robot encountered in simulation is not the same [as] the real world,” she says. “No matter how hard you try, it’s very hard to create photo-realistic environments in simulation with the right kind of lighting and everything … so when you give the robot its sensor, it’s suddenly encountering sunlight, and cloudy skies … and it doesn’t know how to deal with those conditions.” This particular case applies to robots with camera sensors. Other types of sensors, for example sonar or infrared, can suffer from other distortion factors. 

The Prorok Lab is teaching robots by simulating (and learning from) collisions with other objects in virtual reality. Their results demonstrate that, after only a few runs in mixed reality, collisions are significantly reduced. © Prorok Lab

Real-world swarms

There are many exciting real-world applications for swarm robotics. Militaries have made some of the most notable advances in the area. US government agencies, such as the Defense Advanced Research Projects Agency (DARPA) and the Navy, are investing in developing swarms of uncrewed aerial vehicles and boats.

Civilian applications are also taking flight. Search and rescue missions often demand access to difficult-to-reach places, exploring unknown environments and solving complex geometrical problems (such as systematically searching a collapsed building). Swarms of small flying or ground-based robot systems are well-suited to such tasks. London-based startup Unmanned Life is one company developing AI software to manage drone swarms for search and rescue tasks, as well as for commercial security such as in ports. Similarly, Southampton-based Windracers is working on autonomous drone swarm technology to detect and fight wildfires. 

In a project led by Oregon State University, researchers are developing swarms to explore hard-to-reach underwater polar environments, where communications with the surface are limited. They hope that sending them to places such as the cavities underneath ice shelves will shed light on how ice melt contributes to sea level rise. 

Autonomous swarms have even been suggested for manufacturing, known as swarm 3D printing, and medicine, with microscopic swarms that could deliver pharmaceutical or surgical interventions inside the body.

The environmental cost of AI

🌍 Energy consumption and extracting rare materials

Environmental harms, too, are a concern of Amanda's. “People don’t realise how costly AI is from an environmental point of view. It costs a lot of electricity to train one ChatGPT model. So I think we need to think a bit more about, when you’re embarking on training a really big model, is it worth burning down half a forest to do that?” she says. 

“I think reuse and recycling of models needs to be taken more seriously, because it’s just not sustainable for the planet. GPUs [graphics processing units – powerful computer chips often used for scientific and AI applications] are made of precious metals, electronics are scarce, we’re blowing up mountains all over the world to generate these things, let alone the electricity you then need to run them, and cool the buildings they’re in … people don’t realise this backstage part of AI. We need people working in parallel to find solutions to those aspects, too.”

However, before these exciting applications materialise, engineers must address key challenges in the field, such as keeping costs down, extending battery life and miniaturising the individual robots. Researchers share the goal of keeping the individual robots as simple and cheap as possible, to maximise scalability and power efficiency. Specific challenges, such as programming in-flight adjustments for drones that blow each other off-course, also occupy researchers, but AI is helping to refine these systems.

AI, too, comes with its own problems. “I think one thing that people don’t really realise is that the larger the models are, the less interpretable they are, and the less we can verify and certify their behaviour. So it’s really difficult for us to guarantee what the model’s going to do in certain situations,” says Amanda.

She draws a parallel with large language models such as ChatGPT, and the unexpected ways they can behave. “It’s the same thing with robots. But the thing with robots is, now we’re not talking about virtual harm, we’re talking physical harm, right? They’re moving in a physical world. Their actions are not words, their action is motion. So, I think we have to be a bit careful about that.”


Amanda Prorok is Professor of Collective Intelligence and Robotics in the Department of Computer Science and Technology at the University of Cambridge. Her mission is to develop solutions for collective intelligence in multi-robot and multi-agent systems. This research brings in methods from machine learning, planning and control.

Craig Reynolds is an unaffiliated researcher and retired software developer. He is best known for the ‘boids’ model of flocking and similar collective motion, and has researched the evolution of camouflage. His research publications have been cited 18,000 times. His feature film work won AMPAS’s Scientific and Engineering Award in 1998.

Dr Anna Ploszajski is an award-winning materials scientist, author, presenter, comedian and storyteller based in London. Her work centres around engaging traditionally underserved audiences with materials science and engineering through storytelling, as well as delivering training courses and undertaking academic research storytelling in science.
