Highlights

In brief

The Disentangled Graph Convolutional Network (DGCN) model significantly improves the accuracy of trajectory forecasting in congested environments by effectively encoding and disentangling spatial and temporal interactions between robots and moving objects.


Robots gracefully weave through the crowds

23 Jan 2024

A new machine learning framework helps train robots to safely navigate around pedestrians in crowded, real-world settings.

Navigating crowded public spaces at peak hours can sometimes feel like a frustrating maze, where every step forward is met with obstacles of bustling, unyielding people. If this is already challenging for humans, imagine how much harder it is for a robot.

Predicting the future path or movement of objects, or trajectory forecasting, is thus a critical in-built mechanism for robots designed to operate autonomously in dynamic, real-world settings.

“Predicting accurately where pedestrians will move helps robots to navigate in crowded areas without colliding with humans, while respecting their personal space,” explained Niraj Bhujel, a Research Scientist with A*STAR’s Institute for Infocomm Research (I2R). Additionally, robots equipped with trajectory forecasting can navigate more efficiently by planning a path that side-steps obstacles to minimise delays.

Graph Convolutional Networks, or GCNs, have emerged as powerful machine learning tools for trajectory forecasting in complex, highly dynamic environments. In essence, they give robots spatial awareness, but their accuracy is known to degrade when predicting the future behaviour of moving objects in unfamiliar settings.

Bhujel and I2R colleague, Wei-Yun Yau, hypothesised that breaking crowd interaction data down into separate spatial and temporal factors could help next-generation machine learning models forecast pedestrian trajectories more precisely.

Their work culminated in the Disentangled Graph Convolutional Network (DGCN), which features neural message passing, a mechanism by which information is shared and processed between the nodes of a network. When a node receives messages from its neighbours, it combines them with its own information to build a high-resolution picture of the robot’s surroundings.

“Such combinations provide a special lens to the model that shows where humans are and how they change their actions over time and space,” explained Bhujel, adding that the DGCN’s initial prediction is also frequently corrected to improve the reliability of the final prediction.
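The message-passing idea described above can be sketched in a few lines of code. This is a minimal illustrative example, not the DGCN implementation: the names (node_features, adjacency, W_self, W_neigh), the mean aggregation and the ReLU nonlinearity are all simplifying assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

num_pedestrians, feat_dim = 4, 2  # 4 pedestrians, each with a 2D feature
node_features = rng.standard_normal((num_pedestrians, feat_dim))

# Adjacency: 1 if two pedestrians are close enough to interact.
adjacency = np.array([[0, 1, 1, 0],
                      [1, 0, 1, 0],
                      [1, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)

# Weights would normally be learned; random here, for illustration only.
W_self = rng.standard_normal((feat_dim, feat_dim))
W_neigh = rng.standard_normal((feat_dim, feat_dim))

def message_passing_step(x, adj):
    """Each node averages its neighbours' features (the 'messages')
    and combines them with its own, followed by a ReLU nonlinearity."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1.0)  # avoid divide-by-zero
    messages = (adj @ x) / deg                          # mean over neighbours
    combined = x @ W_self + messages @ W_neigh
    return np.maximum(combined, 0.0)                    # ReLU

updated = message_passing_step(node_features, adjacency)
print(updated.shape)  # one updated feature vector per pedestrian
```

Stacking several such steps lets information propagate across the whole crowd, which is how a graph network builds up awareness of interactions beyond a node's immediate neighbours.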

This innovative approach paid off, with validation data demonstrating that the DGCN showed superior accuracy at a range of prediction horizons compared to conventional GCNs. They also found that the model can effectively account for the influence of pedestrians or vehicles within a radius of up to 8 m without any loss of performance.

Building on this momentum, Bhujel and his team are now working on how to computationally capture an individual’s movement intention within crowded places. Together, these advancements have the potential to enhance the efficiency and safety of autonomous navigation systems, helping to integrate robots into human-centric spaces.

The A*STAR researchers contributing to this research are from A*STAR’s Institute for Infocomm Research (I2R).

Want to stay up to date with breakthroughs from A*STAR? Follow us on Twitter and LinkedIn!

References

Bhujel, N. and Yau, W.-Y. Disentangling crowd interactions for pedestrians trajectory prediction. IEEE Robotics and Automation Letters 8 (5), 3078-3085 (2023).

About the Researchers

Niraj Bhujel

Research Scientist

Institute for Infocomm Research (I2R)
Niraj Bhujel joined A*STAR’s Institute for Infocomm Research (I2R) as a Research Scientist in March 2022. Bhujel, a recipient of the A*STAR Singapore International Graduate Award, obtained his PhD in Electrical and Electronics Engineering from Nanyang Technological University, Singapore in July 2022. His research lies at the intersection of deep learning, robotics and computer vision. He has published research on multi-human tracking and path prediction in respected journals and conferences such as RAL and IROS. Bhujel’s commitment to advancing technology and solving real-world problems is a driving force in his career.

Wei-Yun Yau

Department Head, Robotics and Autonomous Systems

Institute for Infocomm Research (I2R)
Wei-Yun Yau received his PhD degree (1999) from Nanyang Technological University, Singapore. He is currently with A*STAR’s Institute for Infocomm Research (I2R) as Head of the Robotics and Autonomous Systems department. Together with his team, Yau has developed several award-winning robotic solutions and spun off Sensorbot Pte Ltd. He is the recipient of the TEC Innovator Award 2002, Tan Kah Kee Young Inventors’ Award 2003 (Merit), IES Prestigious Engineering Achievement Awards 2006 and 2023, Standards Council Distinguished Award 2007, and MTI Firefly Silver Award 2020 for Most Innovative Project/Policy. Yau’s research interests include intelligent robots and biometrics; he has 15 patents granted and 200 publications.

This article was made for A*STAR Research by Wildtype Media Group