Arm-Pushing Controller
The workflow of the RL-based arm-pushing controller from training to deployment.
Interactive navigation is crucial in scenarios where proactively interacting with objects can yield shorter paths, thus significantly improving traversal efficiency. Existing methods primarily focus on using the robot body to relocate obstacles during navigation. However, they prove ineffective in narrow or constrained spaces where the robot's dimensions restrict its manipulation capabilities. This paper introduces a novel interactive navigation framework for legged manipulators, featuring an active arm-pushing mechanism that enables the robot to reposition movable obstacles in space-constrained environments. To this end, we develop a reinforcement learning-based arm-pushing controller with a two-stage reward strategy for object manipulation. Specifically, this strategy first directs the manipulator to a designated pushing zone to achieve a kinematically feasible contact configuration. Then, the end effector is guided to maintain its position at appropriate contact points for stable object displacement while preventing toppling. The simulations validate the robustness of the arm-pushing controller, showing that the two-stage reward strategy improves policy convergence and long-term performance. Real-world experiments further demonstrate the effectiveness of the proposed navigation framework, which achieves shorter paths and reduced traversal time.
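To make the two-stage reward strategy concrete, the following is a minimal Python sketch of how such a staged reward could be structured. The function name, thresholds, and weighting terms are illustrative assumptions, not the exact formulation used in the paper.

import numpy as np

def two_stage_pushing_reward(ee_pos, pushing_zone_center, contact_point,
                             object_tilt, zone_radius=0.10,
                             w_reach=1.0, w_contact=2.0, w_tilt=0.5):
    """Illustrative two-stage reward for an arm-pushing policy.

    Stage 1: drive the end effector into a designated pushing zone.
    Stage 2: once inside the zone, keep the end effector at the desired
    contact point while penalizing object tilt to avoid toppling.
    All weights and thresholds are assumed values for illustration.
    """
    dist_to_zone = np.linalg.norm(ee_pos - pushing_zone_center)

    if dist_to_zone > zone_radius:
        # Stage 1: dense shaping toward the pushing zone.
        return -w_reach * dist_to_zone

    # Stage 2: track the contact point and keep the object upright.
    dist_to_contact = np.linalg.norm(ee_pos - contact_point)
    return w_contact * np.exp(-dist_to_contact) - w_tilt * abs(object_tilt)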
Comparison of interactive navigation with and without the active arm-pushing mechanism.
The planning algorithm accounts for the time cost of pushing obstacles and automatically selects the more time-efficient path, even if it is slightly longer in distance.
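As a rough illustration of this selection logic, the sketch below compares candidate paths by estimated traversal time rather than by length alone. The per-obstacle pushing-time estimate and the nominal walking speed are assumed placeholders.

def estimated_traversal_time(path_length, num_pushes,
                             nominal_speed=0.5, push_time=8.0):
    """Estimate total time for a candidate path (assumed constants).

    path_length   : geometric length of the path in meters
    num_pushes    : number of movable obstacles that must be pushed
    nominal_speed : assumed walking speed in m/s
    push_time     : assumed average time to push one obstacle aside, in s
    """
    return path_length / nominal_speed + num_pushes * push_time

def select_path(candidates):
    """Pick the candidate with the lowest estimated traversal time.

    candidates: list of (path_length, num_pushes) tuples.
    A slightly longer path can win if it avoids expensive pushes,
    and a pushing path can win if the detour it avoids is long.
    """
    return min(candidates, key=lambda c: estimated_traversal_time(*c))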
In addition to cubic objects, the arm-pushing controller can also effectively manipulate cylindrical obstacles, highlighting its versatility in handling diverse shapes.
The controller is also able to push rectangular boxes, suggesting that the learned policy generalizes well to objects with different dimensions and structural properties.
When an object is determined to be immovable, the planner automatically reclassifies it from a movable to a static obstacle and replans an alternative path accordingly.
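A minimal sketch of this fallback behavior is shown below; the obstacle-map interface, the displacement threshold, and the replanning call are hypothetical names used only for illustration.

def handle_push_outcome(obstacle_id, observed_displacement, obstacle_map,
                        planner, min_displacement=0.02):
    """If a push barely moves the object, treat it as static and replan.

    observed_displacement : measured object motion after the push attempt (m)
    min_displacement      : assumed threshold below which the object is
                            considered immovable
    obstacle_map, planner : hypothetical interfaces for illustration
    """
    if observed_displacement < min_displacement:
        # Reclassify the obstacle so future plans route around it.
        obstacle_map.set_static(obstacle_id)
        return planner.replan(obstacle_map)
    # Otherwise keep the obstacle movable and continue the current plan.
    return planner.current_path()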
We employ a learned Whole-Body Controller (WBC) as the low-level controller and a reinforcement learning-based arm-pushing policy as the high-level motion planner.
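The hierarchy can be pictured as a simple loop in which the high-level pushing policy outputs end-effector targets that the low-level WBC tracks. The class and method names below are assumptions for illustration, not the actual interfaces.

def control_loop(pushing_policy, wbc, robot, steps=1000):
    """Illustrative two-level control loop (assumed interfaces).

    pushing_policy : RL policy mapping observations to end-effector targets
    wbc            : learned whole-body controller mapping targets and
                     proprioception to joint commands
    robot          : hypothetical hardware/simulation interface
    """
    for _ in range(steps):
        obs = robot.get_observation()
        # High level: the arm-pushing policy proposes an end-effector target.
        ee_target = pushing_policy.act(obs)
        # Low level: the WBC converts the target into joint commands.
        joint_cmd = wbc.compute(obs, ee_target)
        robot.apply(joint_cmd)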
@article{bi2025interactive,
  title={Interactive Navigation for Legged Manipulators with Learned Arm-Pushing Controller},
  author={Bi, Zhihai and Chen, Kai and Zheng, Chunxin and Li, Yulin and Li, Haoang and Ma, Jun},
  journal={arXiv preprint arXiv:2503.01474},
  year={2025}
}