The future of assistive robots
Did you know?
- Roboticists are exploring different types of assistive robots that could help people live independently for longer, such as by helping with household tasks
- Large language models such as GPT-4 (the AI model behind one of the latest iterations of ChatGPT) are increasingly guiding robots’ behaviour, to help them interact with people more intuitively
- Lightweight robotic clothing might one day be able to help people climb stairs, stand for longer, and sit and stand from chairs more easily
Maintaining fusion reactors. Scaling wind turbines in the North Sea. Packing online shopping orders. Welding parts onto cars in factories. Scouring the surface of Mars for signs of ancient life. So far, robots have passed all of these tests with flying colours, whether they’re wheeled, flying, walking, or just an arm bolted to the floor.
In these cases, the robots in question are either working well away from humans, or at least at arm’s length. But for as long as the field has existed, robotics researchers have been keenly exploring how robots can interact safely (and usefully) with people. The holy grail, although it may still seem like the stuff of sci-fi films, is developing robots that assist people at home.
With an ageing population and far fewer care workers than we need, the UK government and others are exploring robots in social care, also known as assistive robotics. Researchers maintain that these could play an important role in supporting people to thrive as they age, while alleviating pressure on the care sector. But first, technical challenges must be addressed before robots can be used safely in settings such as people’s own homes, residential care homes and hospitals. Ethics are equally important – especially ensuring that assistive robots are never seen as a substitute for human care.
What are assistive technologies?
Assistive technologies are designed to help people whose needs are not otherwise met by the existing built or social environment, such as disabled people, older adults or people with long-term health conditions. They can positively impact many different areas of people’s lives, from school and work, to getting around, cooking and leisure activities.
Whether we realise it or not, most of us use assistive technologies regularly. Voice-to-text and voice recognition technology are among the many everyday technologies that help show why making reasonable adjustments for disabled people benefits everyone. Other examples include devices such as hearing aids, objects such as wheelchairs, and digital technologies such as screen readers.
Assistive technologies don’t have to be as ‘high tech’ as these examples. After visiting Google’s Accessibility Discovery Centre in London, Stephen Morris, a campaigns officer at Sense who is deafblind, wrote about a self-balancing spoon for people with hand tremors. He emphasised the need to always involve disabled people when designing and developing technology. (And indeed, many disabled people take a DIY approach, developing their own assistive technologies or adapting existing devices and objects to better meet their needs.)
In safe hands
Robots designed to carry shopping or physically assist with household tasks could become the equivalent of household appliances, helping older people continue to live independently. While we are still far from seeing them in our homes, with more engineering and robotics research they may become a reality in the future.
One of the most important challenges to address first is safety. Assistive robots must be safe by design, as they will be working very close to people, often touching them. At the extreme end, they may be handling sharp objects – a fork, if they are helping someone to eat a meal, or a razor, for shaving assistance. These scenarios underline the fact that robots must be designed so that they cannot make a mistake that harms someone. Even in a seemingly less ‘risky’ scenario, such as helping someone put on a jacket, a robot accidentally forcing their arm could injure them.
Improving safety can be achieved in different ways. One is for the robot to monitor the person and understand the situation, so that it can react quickly to any potential harm – stopping, slowing down or backing away to avoid a collision, for instance. Another route is through the controller, the system that governs the robot’s movement. Ordinarily, robots compute and then follow set movements. A ‘compliant’ robotic controller would allow the robot to follow a set movement trajectory but, crucially, give way when pushed or pulled, much as a human arm does. Currently, this means solving a delicate trade-off: stiffer control modes track movements more precisely, whereas compliant controllers are safer but less precise.
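To make that trade-off concrete, here is a minimal Python sketch of a spring-damper (‘impedance’) law for a single robot joint – the function name and values are illustrative, not taken from any particular robot. Lowering the stiffness means the same position error produces far less force, which is what lets a person safely push the arm away.

```python
# A minimal sketch of a 1-degree-of-freedom compliant (impedance) controller.
# All names and numbers are illustrative.

def impedance_torque(target_pos, pos, vel, stiffness, damping):
    """Spring-damper law: the joint is pulled towards the target position,
    but an external push simply stretches the virtual 'spring'."""
    return stiffness * (target_pos - pos) - damping * vel

# A stiff controller tracks the trajectory precisely...
stiff = impedance_torque(target_pos=0.5, pos=0.3, vel=0.0,
                         stiffness=300.0, damping=20.0)

# ...while a compliant one exerts far less force for the same error,
# so a person can push the arm out of the way: precision traded for safety.
soft = impedance_torque(target_pos=0.5, pos=0.3, vel=0.0,
                        stiffness=30.0, damping=5.0)

print(stiff, soft)  # 60.0 vs 6.0 (newton-metres, in this toy example)
```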
Unsurprisingly, progress on all of this might come down to learning from the expertise of caregivers. In March 2024, a team at the University of York’s Institute for Safe Autonomy developed a dressing robot that uses AI to learn from a demonstration performed by a human. One lesson was that two hands were better than one (a single arm was previously the norm for dressing robots). Jihong Zhu, the lead researcher, explained that using just one robotic arm could force the person to move or bend their own arm in an awkward or uncomfortable way. After opting for a two-handed design, the team also built in algorithms that allow the gentle touch of a human hand to stop the robot’s actions.
How do robots navigate their environments?
Robots usually detect their surroundings, and so navigate around obstacles, using 2D laser sensors mounted in their base rather than cameras. However, camera-based navigation methods also exist, such as those used by some robotic vacuum cleaners. Other kinds of sensor, such as sonar and distance sensors, can also be combined with vision through what is known as sensor fusion. Which combination is used depends a lot on the kind of robot and its hardware.
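As a rough illustration of the first step in laser-based navigation, the sketch below (with made-up readings and an assumed sensor layout) converts a 2D scan’s angle-and-distance readings into points around the robot, then flags any that are close enough to matter for obstacle avoidance.

```python
import math

# Illustrative sketch: turning a 2D laser scan into obstacle positions.
# In practice, 'ranges' would come from the robot's laser scanner;
# the values and parameters here are invented for the example.

def scan_to_points(ranges, angle_min, angle_increment, max_range):
    """Convert each (angle, distance) reading into an (x, y) point in the
    robot's frame, discarding readings beyond the sensor's useful range."""
    points = []
    for i, r in enumerate(ranges):
        if r < max_range:
            angle = angle_min + i * angle_increment
            points.append((r * math.cos(angle), r * math.sin(angle)))
    return points

ranges = [2.5, 0.4, 0.42, 3.0]  # metres; the 0.4s suggest a nearby object
points = scan_to_points(ranges, angle_min=-0.1, angle_increment=0.1,
                        max_range=3.5)
too_close = [p for p in points if math.hypot(*p) < 0.5]
print(too_close)  # points within 0.5 m that the planner should steer around
```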
To detect objects and people, 3D cameras such as the Microsoft Kinect are often used. These cameras provide both 2D colour images and depth maps (where every pixel in the image describes the distance from an object to the camera). The two can be combined to create 3D point clouds, which are useful for visualising the data and representing real-world objects and surfaces.
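The combination works via the pinhole camera model: each pixel is back-projected along its viewing ray by its measured depth. Below is a minimal sketch in Python; the intrinsic parameters (fx, fy, cx, cy) are illustrative stand-ins for the calibration values a real camera would report.

```python
import numpy as np

# A minimal sketch of back-projecting a depth map into a 3D point cloud
# using the pinhole camera model. The intrinsics here are made up.

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Each pixel (u, v) with depth z maps to the 3D point:
    x = (u - cx) * z / fx,  y = (v - cy) * z / fy,  and z itself."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

depth = np.full((480, 640), 1.2)  # toy depth map: everything 1.2 m away
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (307200, 3): one 3D point per pixel
```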
Safety is also tied to perception. Assistive tasks often involve dynamic and deformable objects, such as clothes or food. These kinds of objects are particularly hard for robots to perceive and model, not to mention to manipulate – whether picking up an orange or a T-shirt. Perception is impacted by the surroundings, too. Operating in someone’s real-life living room – perhaps with several chairs, a coffee table and a cat to navigate – is very different to operating in a specially designed robotics lab. Hoping to address this, a team of roboticists, architecture researchers and healthcare specialists from several universities, including UCL and Cardiff, are analysing the layouts of ‘typical’ residential homes to create practical guidelines for robot developers.
A promising route under development at a research hub based at the University of Bristol is robotic clothing, which combines elements of robotic exoskeletons with everyday garments. “Smart robotic clothing has the potential to act as an enabler of movement, activity and independence for people with disability and frailty,” explained Professor Jonathan Rossiter, a project lead, in a press release. The project, launched in October 2024, aims to develop robotic clothing that will help people climb stairs, walk further, and stand up from a chair more easily. This could easily make for a cumbersome, bulky device. How might engineers go about designing something people actually want to wear?
Clues could lie in a prototype from the same researchers, called ‘The Right Trousers’, shown at the British Science Festival in 2018. These lightweight trousers applied small electrical pulses to stimulate muscle groups in the wearer’s legs, and included a stiffening knee brace based on thermally sensitive graphene to help people stand for longer.
Human, but not too human
Perhaps the most important challenge of all is how robots and humans communicate. We’re used to communicating with other people, but not so much with robots. In any assistive task, effective, natural and quick communication is crucial for smooth assistance. While robots that are too humanlike are often considered creepy, this doesn’t mean that robots shouldn’t be social. Some people may find it easier to communicate their needs by speaking to a robotic assistant and having it talk back to them. Making communication simpler in turn helps the robot complete its task more easily.
In January 2024, engineers from a research consortium including Heriot-Watt University’s National Robotarium took socially assistive robots to a hospital in Paris for a trial to help staff with routine tasks. Thanks to large language models (also known as LLMs, which are the type of AI behind ChatGPT) embedded in the robots, they could smoothly greet patients, answer questions, and provide directions – even following a conversation with several people. However, the consortium noted challenges relating to their ‘ease of use and conversational complexity’, so there’s more to be done for them to be a seamless fit in real-world healthcare settings.
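To picture how an LLM might be embedded in such a robot – a hedged sketch only, not the consortium’s actual system – the code below keeps a running conversation history under a fixed system prompt, which is what lets the model follow a multi-turn exchange. The query_llm function is a stub standing in for whichever model API the robot uses.

```python
# Illustrative sketch of wrapping an LLM for a socially assistive robot.
# The prompt, behaviour and query_llm stub are assumptions for the example.

SYSTEM_PROMPT = (
    "You are a hospital reception robot. Greet patients politely, answer "
    "questions briefly, and give clear directions. If unsure, offer to "
    "fetch a member of staff."
)

def query_llm(system, messages):
    """Stub: a real deployment would send the system prompt and the
    conversation history to an LLM service and return its reply."""
    return "Hello! The outpatient clinic is along the corridor, on the left."

def respond(history, user_utterance):
    """Keep a running history so the model can follow a multi-turn
    conversation, even one involving several people."""
    history.append({"role": "user", "content": user_utterance})
    reply = query_llm(SYSTEM_PROMPT, history)
    history.append({"role": "assistant", "content": reply})
    return reply  # would be passed to the robot's text-to-speech module

history = []
print(respond(history, "Hi, where do I go for my appointment?"))
```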
On top of this, if they’re to be truly helpful to people, assistive robots must be able to understand the context of the task. For example, if a robot is helping someone eat, it needs to ‘know’ to hold off when the person is speaking or chewing. Engineers could inform their research in this area by talking to carers, who take in many different cues – visual, verbal and tactile – when assisting a person.
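A toy version of this kind of context-awareness might look like the sketch below, where the two detector functions are hypothetical placeholders for the real perception systems that would watch and listen to the person.

```python
import time

# Toy sketch of context-aware feeding assistance: the robot only offers
# the next bite when the person is neither speaking nor chewing.

def person_is_speaking():
    """Placeholder for a microphone-based voice-activity detector."""
    return False

def person_is_chewing():
    """Placeholder for a camera-based detector watching jaw movement."""
    return False

def feeding_loop(bites_remaining):
    while bites_remaining > 0:
        if person_is_speaking() or person_is_chewing():
            time.sleep(0.5)          # hold off and check again shortly
            continue
        print("Offering next bite")  # a real robot would move its arm here
        bites_remaining -= 1

feeding_loop(bites_remaining=3)
```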
Recent advances in AI, such as LLMs and vision language models (which are similar to LLMs but also take image data into account), look promising for tackling these issues too. However, it is still unclear how to model and understand social situations so that robots can act according to human expectations. Unlike humans, robots (and even our most advanced AI models) still lack common sense. My group at King’s College London has recently published work exploring whether LLMs can align with people’s social intuitions – our preferences and values. More advanced models, such as GPT-4, have improved on their predecessors, so it stands to reason that LLMs will continue to improve in this way, although we are still far from human-level performance.
Because of this, they also lack the much-needed ability to adapt to a person. This means understanding and adapting not only to their needs, but also to their preferences – which is important for comfort. Such adaptation must also be consistent throughout a person’s lifetime: the robot should ‘remember’ if a person likes to dress in a certain way. At the same time, people naturally change over time, and so will their needs and preferences; the robot should change accordingly. Personalising behaviour to a specific person is what carers do when assisting someone, and it helps make robotic assistants both more effective and more socially acceptable.
Adapting assistive robotic behaviour to people’s preferences was the subject of my PhD thesis at the Institut de Robòtica i Informàtica Industrial (CSIC-UPC). I looked at the types of preference relevant to certain assistive tasks, such as helping people eat, or put on shoes or a jacket, then used these to guide the robot’s decision-making when carrying out the task. After these changes to the robot’s behaviour were implemented, surveys showed that people were more satisfied with their interactions with it. We are likely to see work in this area continue: so-called ‘person-centred’ autonomous robots are also the focus of a recently announced research centre at the University of Edinburgh, which will investigate frameworks for autonomous decision-making, as well as researching people’s reactions and behaviours.
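One simple way to picture preference-guided decision-making – an illustrative sketch, not the thesis implementation – is to score each candidate way of performing a task against a person’s stated preferences and choose the closest match. Updating the preference values over time would give the kind of lifelong adaptation described above.

```python
# Illustrative sketch: choosing how to perform a task based on a person's
# preferences. The preference dimensions and plans are invented examples.

PREFERENCES = {"speed": 0.2, "gentleness": 0.9, "proximity": 0.4}

CANDIDATE_PLANS = {
    # how strongly each plan exhibits each property, on a 0-1 scale
    "quick_feed":  {"speed": 0.9, "gentleness": 0.3, "proximity": 0.8},
    "gentle_feed": {"speed": 0.3, "gentleness": 0.9, "proximity": 0.5},
}

def score(plan, preferences):
    """Reward plans whose properties are close to what the person prefers."""
    return -sum((plan[k] - v) ** 2 for k, v in preferences.items())

best = max(CANDIDATE_PLANS,
           key=lambda name: score(CANDIDATE_PLANS[name], PREFERENCES))
print(best)  # 'gentle_feed' for this person; re-chosen as preferences change
```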
In terms of maturity, social robots – designed for companionship – are ahead of the curve. Several research labs are studying social robots in residential homes. One of the leading contenders here is PARO, a commercially available fluffy white baby harp seal robot. Like a therapy pet, but easier to care for and less unpredictable, it’s designed to soothe and improve mental wellbeing for older people with dementia. Yet opinions in the field are split: many researchers studying ageing believe social robots could hamper opportunities for genuine human contact, and raise privacy and personal data risks.
Pricing it up
Cost may be the ultimate barrier to the adoption of assistive robots. Currently, robots are still highly specialised equipment and their cost reflects this: PARO is marketed at £6,000, for example. While specific-purpose robots exist for some tasks (such as assisted eating, or vacuuming), their cost is still high, and a person might need several devices to help with different tasks – whereas a single general-purpose assistive robot could help with many of them.
Importantly, assistive robots should only form a small part of the care environment, as human interaction in care settings is vital to people’s emotional wellbeing. Co-creation and co-design should consider feedback from older people, the potential future recipients of assistance from robots, as well as carers, who will be working alongside them.
Maybe, for example, people don’t mind a robot dispensing medication, but prefer a human carer to help them eat. And perhaps carers would welcome manual support from robotic equipment designed to lift and transfer people – if they are reassured that it is safe for people in care.
Assistive robots therefore have huge potential to change and improve the lives of many people, but much engineering work and research is still needed to make them a realistic option for supporting people in the future.
Contributors
Dr Gerard Canal is a lecturer in autonomous systems at the Department of Informatics, King’s College London. He researches how robots in home environments can be made more autonomous, to be able to further assist people in living independently for longer. His research interests involve robot explainability (to make robots explain their behaviours to humans), goal reasoning, and planning applied to robotics. Before joining King’s, he did a PhD in robotics at the Institut de Robòtica i Informàtica Industrial, IRI (CSIC-UPC) in Barcelona, with research on adapting assistive robot behaviours to user preferences.