A collaboration between NVIDIA and academic researchers is prepping robots for surgery.
ORBIT-Surgical — developed by researchers from the University of Toronto, UC Berkeley, ETH Zurich, Georgia Tech and NVIDIA — is a simulation framework to train robots that could augment the skills of surgical teams while reducing surgeons’ cognitive load.
It supports more than a dozen maneuvers inspired by the training curriculum for laparoscopic procedures, aka minimally invasive surgery: grasping small objects like needles, passing them from one arm to the other and placing them with high precision.
The physics-based framework was built using NVIDIA Isaac Sim, a robotics simulation platform for designing, training and testing AI-based robots. The researchers trained reinforcement learning and imitation learning algorithms on NVIDIA GPUs and used NVIDIA Omniverse, a platform for developing and deploying advanced 3D applications and pipelines based on Universal Scene Description (OpenUSD), to enable photorealistic rendering.
Using the community-supported da Vinci Research Kit, provided by the Intuitive Foundation, a nonprofit supported by robotic surgery leader Intuitive Surgical, the ORBIT-Surgical research team demonstrated how skills trained on a digital twin in simulation transfer to a physical robot in a lab environment, as shown in the video below.
ORBIT-Surgical will be presented Thursday at ICRA, the IEEE International Conference on Robotics and Automation, taking place this week in Yokohama, Japan. The open-source code package is now available on GitHub.
A Stitch in AI Saves Nine
ORBIT-Surgical is based on Isaac Lab, a modular framework for robot learning built on Isaac Sim. Isaac Lab includes support for various libraries for reinforcement learning, where AI agents learn by trial and error to maximize a reward, and imitation learning, where agents are trained to mimic ground-truth expert examples.
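To make the imitation-learning side concrete, here is a minimal behavioral cloning sketch in Python: a policy network is fit by supervised regression to reproduce expert state-action pairs. The dimensions and the random placeholder "demonstrations" are assumptions for illustration, not ORBIT-Surgical's actual data format.

```python
# Minimal sketch of imitation learning as behavioral cloning: a policy
# network is trained by supervised learning to reproduce expert
# demonstrations. The demonstration tensors are random placeholders;
# in practice they would come from recorded expert trajectories.
import torch
import torch.nn as nn

obs_dim, act_dim = 32, 7  # assumed sizes for a single-arm dVRK-style task

# Placeholder "expert" dataset of observation/action pairs.
expert_obs = torch.randn(10_000, obs_dim)
expert_act = torch.randn(10_000, act_dim)

policy = nn.Sequential(nn.Linear(obs_dim, 256), nn.ELU(),
                       nn.Linear(256, act_dim))
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

for epoch in range(10):
    # Supervised regression onto the expert's actions.
    pred = policy(expert_obs)
    loss = nn.functional.mse_loss(pred, expert_act)
    opt.zero_grad()
    loss.backward()
    opt.step()
```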
The surgical framework enables developers to train robots like the da Vinci Research Kit robot, or dVRK, to manipulate both rigid and soft objects using reinforcement learning and imitation learning frameworks running on NVIDIA RTX GPUs.
ORBIT-Surgical introduces more than a dozen benchmark tasks for surgical training, including one-handed tasks such as picking up a piece of gauze, inserting a shunt into a blood vessel or lifting a suture needle to a specific position. It also includes two-handed tasks, like handing a needle from one arm to another, passing a threaded needle through a ring pole and reaching two arms to specific positions while avoiding obstacles.
One of ORBIT-Surgical’s benchmark tasks is inserting a shunt, shown on the left with a real-world robot and on the right in simulation.
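As a sketch of how benchmark tasks like these might be exercised programmatically, the snippet below rolls out a policy on a few Gymnasium-style environments. The task IDs are hypothetical placeholders standing in for ORBIT-Surgical's real task registry, which is documented in the GitHub repository.

```python
# Hedged sketch: rolling out episodes on one- and two-handed benchmark
# tasks through the Gymnasium API. These task IDs are hypothetical
# placeholders, not ORBIT-Surgical's actual registry names.
import gymnasium as gym

TASKS = [
    "dVRK-GauzePick-v0",       # one-handed: pick up a piece of gauze
    "dVRK-ShuntInsert-v0",     # one-handed: insert a shunt into a vessel
    "dVRK-NeedleHandover-v0",  # two-handed: pass a needle between arms
]

for task_id in TASKS:
    env = gym.make(task_id)
    obs, info = env.reset()
    done, total_reward = False, 0.0
    while not done:
        # A trained policy would choose actions here; we sample randomly.
        action = env.action_space.sample()
        obs, reward, terminated, truncated, info = env.step(action)
        total_reward += reward
        done = terminated or truncated
    env.close()
    print(f"{task_id}: episode return {total_reward:.2f}")
```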
By developing a surgical simulator that takes advantage of GPU acceleration and parallelization, the team boosted robot learning speed by an order of magnitude compared with existing surgical frameworks. They found that the robot digital twin could be trained to complete tasks like inserting a shunt and lifting a suture needle in under two hours on a single NVIDIA RTX GPU.
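The speedup comes from stepping thousands of environment copies in lockstep as batched GPU tensors, so a single forward pass advances every instance of the task at once. The sketch below illustrates that pattern generically in PyTorch; it is not the ORBIT-Surgical API, and the environment count and tensor sizes are assumed.

```python
# Illustrative sketch of GPU-parallel robot learning: many environment
# instances are simulated as one batched tensor, so one GPU pass computes
# actions for every copy of the task at once. Generic illustration only,
# not the ORBIT-Surgical API.
import torch

num_envs = 4096           # environment copies simulated in parallel (assumed)
obs_dim, act_dim = 32, 7  # assumed sizes for a single-arm dVRK-style task
device = "cuda" if torch.cuda.is_available() else "cpu"

# Batched state for all environments lives in one GPU tensor.
obs = torch.zeros(num_envs, obs_dim, device=device)
policy = torch.nn.Sequential(
    torch.nn.Linear(obs_dim, 256), torch.nn.ELU(),
    torch.nn.Linear(256, act_dim),
).to(device)

for _ in range(100):
    # One forward pass produces actions for all 4,096 environments.
    actions = policy(obs)
    # A real simulator would apply `actions` and return new observations;
    # random noise stands in here to keep the sketch self-contained.
    obs = torch.randn_like(obs)
```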
With the visual realism enabled by rendering in Omniverse, ORBIT-Surgical also allows researchers to generate high-fidelity synthetic data, which could help train AI models for perception tasks such as segmenting surgical tools in real-world videos captured in the operating room.
A proof of concept by the team showed that combining simulation and real data significantly improved the accuracy of an AI model at segmenting surgical needles in images, helping reduce the need for large, expensive real-world datasets for training such models.
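A minimal version of that mixed-data recipe, assuming placeholder datasets and sizes, pools simulated and real (image, mask) pairs into one training set:

```python
# Sketch of the mixed-data recipe described above: pool synthetic images
# rendered in simulation with real operating-room images, then train one
# segmentation model on the combined set. The datasets here are random
# placeholders standing in for real loaders.
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Stand-ins for (image, needle-mask) pairs; sizes are illustrative.
synthetic = TensorDataset(torch.randn(200, 3, 128, 128),
                          torch.randint(0, 2, (200, 128, 128)))
real = TensorDataset(torch.randn(50, 3, 128, 128),
                     torch.randint(0, 2, (50, 128, 128)))

# Abundant synthetic frames supplement the small, expensive real dataset.
combined = ConcatDataset([synthetic, real])
loader = DataLoader(combined, batch_size=16, shuffle=True)

for images, masks in loader:
    # The forward pass of a segmentation network (e.g. a U-Net) goes here.
    break
```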
Read the paper behind ORBIT-Surgical, and learn more about NVIDIA-authored papers at ICRA.