Yes, you read that correctly… Researchers in Switzerland have designed a robot companion that can play badminton with humans. The robot is so adept that it can maintain rallies of up to 10 consecutive shots.
And an even bigger surprise – the robot learns from its errors.
They say their robot demonstrates “the feasibility of using legged mobile manipulators in complex and dynamic sports scenarios.”
“Beyond badminton, the method offers a template for deploying legged manipulators in other dynamic tasks where accurate sensing and rapid, whole-body responses are both critical,” they write in the study published in the journal Science Robotics.
Humans coordinate a lot of complex skills to play sports like badminton. Agile footwork allows athletes to effectively cover the extensive court area, while precise hand-eye coordination helps them anticipate and correctly hit the shuttlecock back towards an opponent.
This complex interplay between perception, locomotion, and manipulation makes developing robotic systems capable of playing badminton and other sports a formidable challenge.
Researchers from the Robotic Systems Lab at ETH Zurich tackled the challenge by equipping their 4-legged robot, ANYmal, with a stereo camera for vision-based perception and a dynamic arm to swing a badminton racket.
They used simulations to train a “reinforcement learning-based control framework” that used the camera’s field of view to track and predict the shuttlecock’s trajectory. It then coordinated the motion of the robot’s four legs to move it into the correct position to return the shot.
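To give a flavour of the trajectory-prediction step, here is a minimal sketch of how an interception point could be estimated from a tracked position and velocity, assuming a simple point-mass model with aerodynamic drag. The drag constant, hitting height, and function names are illustrative assumptions, not the estimator described in the study.

```python
import numpy as np

G = 9.81       # gravity, m/s^2
K_DRAG = 0.2   # lumped drag constant for a shuttlecock, 1/m (assumed)

def predict_interception(pos, vel, hit_height=1.0, dt=0.002, t_max=3.0):
    """Integrate the shuttle forward until it falls to hit_height.

    pos, vel: 3-vectors (x, y, z) from vision-based tracking.
    Returns (interception_point, time_to_impact), or None if the shuttle
    never descends to hit_height within t_max seconds.
    """
    p = np.asarray(pos, dtype=float)
    v = np.asarray(vel, dtype=float)
    t = 0.0
    while t < t_max:
        speed = np.linalg.norm(v)
        # drag opposes velocity and grows quadratically with speed
        a = np.array([0.0, 0.0, -G]) - K_DRAG * speed * v
        v = v + a * dt
        p = p + v * dt
        t += dt
        if p[2] <= hit_height and v[2] < 0:
            return p, t
    return None

# Example: shuttle tracked at 3 m height, moving forward and upward
result = predict_interception([0.0, 0.0, 3.0], [4.0, 0.0, 2.0])
```

In the real system this prediction would be refreshed every camera frame, so tracking errors shrink as the shuttle approaches.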
A “perception noise model” then used the camera data to determine the error between the reinforcement learning (RL) controller’s predicted and real-world outcomes.
“This model captured the effect of robot motion on perception quality by accounting for both single-frame object tracking errors and final interception predictions, which reduced the perception sim-to-real gap and allowed the robot to learn perception-driven behaviours.”
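One simple way to picture such a noise model: during simulated training, corrupt the ground-truth shuttle position with noise that grows with the robot’s own motion, mimicking how aggressive movement degrades camera tracking. The coefficients and function below are illustrative assumptions, not the model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_observation(true_pos, base_ang_vel, sigma0=0.01, k_motion=0.05):
    """Simulated camera measurement of the shuttle position.

    sigma0:   baseline tracking noise when the robot is still (m)
    k_motion: extra noise per rad/s of base rotation (m per rad/s)
    """
    sigma = sigma0 + k_motion * np.linalg.norm(base_ang_vel)
    return np.asarray(true_pos, dtype=float) + rng.normal(0.0, sigma, size=3)
```

Training against motion-dependent noise like this is what lets a policy learn to trade agility for perception quality, since fast whole-body motions now carry a measurable cost in tracking accuracy.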
Credit: 2025 Yuntao Ma, Robotic Systems Lab, ETH Zurich
According to the study, the robot was able to develop sophisticated human-like badminton behaviours, including “follow-through after hitting the shuttlecock” and “active perception to enhance shuttle state estimation.”
For example, the robot could pitch up to keep the shuttlecock in the camera’s field of view until it needed to pitch down again to swing the racket.
Incredibly, the controller system “also demonstrated the emergent behaviour of moving back near the centre of the court after each hit, similar to how human players prepare for the next hit.”
“The reinforcement learning algorithm balances the trade-off between agile control and accurate shuttlecock perception by optimising the policy’s overall ability to hit the shuttlecock in simulation,” the authors write.
“Extensive experimental results in a variety of environments validate the robot’s capability to predict shuttlecock trajectories, navigate the service area effectively, and execute precise strikes against human players.”
The team has some ideas about how to enhance the robot’s athletic capabilities even further.
“Given that human players often predict shuttlecock trajectories by observing their opponents’ movements, human pose estimation could also be a valuable modality for improving … performance,” they suggest.
“A high-level badminton command policy that adapts swing commands on the basis of the opponent’s body movements could improve the robot’s ability to maintain rallies and increase its chances of winning.”