Visually Guided Model Predictive Robot Control via 6D Object Pose Localization and Tracking

Explore the publication “Visually Guided Model Predictive Robot Control via 6D Object Pose Localization and Tracking” by our partners LAAS-CNRS and the Czech Institute of Informatics, Robotics and Cybernetics.

This study focuses on enhancing the capabilities of robots to manipulate dynamically moving objects using camera-based systems. The goal is to address scenarios where robots need to interact with objects in motion, such as grasping items on a conveyor belt or collaborating with humans in dynamic environments. The authors propose a novel visual perception module that integrates learning-based 6D object pose localization with a high-rate model-based 6D pose tracker. This enables rapid and accurate estimation of the 6D pose of moving objects from video input, which is crucial for smooth and stable robot control. Additionally, they introduce a visually guided robot arm controller that combines the visual perception module with a torque-based model predictive control algorithm. This allows visual and proprioceptive signals to be integrated asynchronously, ensuring robust and precise control of robot arm movements in dynamic environments. Experimental validation demonstrates the effectiveness of the approach, particularly in scenarios involving real-time interaction with dynamically moving objects using a 7-DoF robot arm.
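To illustrate the asynchronous-integration idea in the abstract, the sketch below shows how low-rate 6D pose measurements from a vision pipeline can be extrapolated to the controller's higher query rate. This is a minimal, hypothetical example (positions only, constant-velocity prediction); the class and method names are our own and do not reflect the paper's actual implementation, which uses a model-based 6D tracker and torque-based MPC.

```python
import numpy as np

class PosePredictor:
    """Hypothetical constant-velocity predictor that upsamples low-rate
    pose measurements (position only, for brevity) so a high-rate
    controller can query a target pose at arbitrary times."""

    def __init__(self):
        self.pose = None        # last measured position, shape (3,)
        self.vel = np.zeros(3)  # finite-difference velocity estimate
        self.t_meas = None      # timestamp of the last measurement

    def update(self, t, pose):
        """Ingest a new (slow) vision measurement at time t."""
        pose = np.asarray(pose, dtype=float)
        if self.pose is not None:
            dt = t - self.t_meas
            if dt > 0:
                self.vel = (pose - self.pose) / dt
        self.pose, self.t_meas = pose, t

    def predict(self, t):
        """Extrapolate the pose to the controller's query time t."""
        return self.pose + self.vel * (t - self.t_meas)

# Vision frames arrive at ~30 Hz; the controller queries between frames.
tracker = PosePredictor()
tracker.update(0.000, [0.50, 0.000, 0.30])
tracker.update(0.033, [0.50, 0.033, 0.30])  # object moving along y
target = tracker.predict(0.040)             # asynchronous controller query
```

In a real controller, `target` would feed the MPC cost as the object's predicted pose; a full 6D version would also propagate orientation (e.g. with quaternions) rather than position alone.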

Read the publication here.

Find all AGIMUS publications here.


Funded by the European Union under GA no.101070165. Views and opinions expressed are, however, those of the author only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them.