Discover How Robots Use Generative AI to Reason and Act with ReMEmbR

This video features ReMEmbR, a project that combines LLMs, VLMs, and retrieval-augmented generation (RAG) so robots can reason over what they see during long-horizon deployments, answer user queries, and produce navigation goals.

The demo uses the NVIDIA Isaac ROS robotics framework running on a Nova Carter robot, developed by NVIDIA and Segway.

Watch as the robot is teleoperated around a large building while recording its pose and observations, captioned by the VILA visual language model running onboard the NVIDIA Jetson AGX Orin.
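
As a rough illustration only (not the ReMEmbR source code), one way to pair each VLM caption with the robot pose is to store timestamped memory records; here `caption_image` is a hypothetical stand-in for whatever call to the onboard VLM produces the caption text.

```python
# Hedged sketch: build a timestamped memory record from a pose and a VLM caption.
import time
from dataclasses import dataclass

@dataclass
class MemoryEntry:
    timestamp: float   # when the observation was made
    position: tuple    # (x, y, theta) robot pose in the map frame
    caption: str       # natural-language description of what was seen

def make_memory_entry(pose, image, caption_image) -> MemoryEntry:
    """Caption the current camera image and attach the robot pose."""
    return MemoryEntry(
        timestamp=time.time(),
        position=pose,
        caption=caption_image(image),  # hypothetical VLM call, e.g. "a vending machine by a door"
    )
```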

The observations are inserted into a database of memories that the ReMEmbR LLM-based agent reasons over to answer queries such as, "Can you take me somewhere to get a snack?" In response, the robot reasons over its memories and produces a goal pose to navigate to.
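
A minimal sketch of that retrieval-and-reasoning step, assuming the memories have been embedded into a vector store; `embed` and `ask_llm` are hypothetical stand-ins for the embedding model and the LLM agent used in the demo.

```python
# Hedged sketch: retrieve relevant memories and ask an LLM for an answer plus a goal pose.
import numpy as np

def retrieve(query, embed, memory_vectors, memories, k=5):
    """Return the k memories whose embeddings are most similar to the query."""
    q = embed(query)                                   # (d,) query embedding
    sims = memory_vectors @ q / (
        np.linalg.norm(memory_vectors, axis=1) * np.linalg.norm(q) + 1e-8
    )                                                  # cosine similarity to each memory
    top = np.argsort(-sims)[:k]
    return [memories[i] for i in top]

def answer_and_navigate(query, embed, ask_llm, memory_vectors, memories):
    """Prompt the LLM with retrieved memories and return its answer and goal pose."""
    context = retrieve(query, embed, memory_vectors, memories)
    prompt = "Memories:\n" + "\n".join(
        f"- t={m.timestamp:.0f}, pose={m.position}: {m.caption}" for m in context
    ) + f"\nUser: {query}\nReply with an answer and a goal pose (x, y, theta)."
    return ask_llm(prompt)   # the robot then navigates to the returned pose
```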

🤖 See the NVIDIA Technical Blog for more on ReMEmbR: https://developer.nvidia.com/blog/enabling-robots-

📝 Learn more about Nova Carter: https://robotics.segway.com/nova-carter/
📝 Learn more about NVIDIA Isaac ROS: https://nvidia-isaac-ros.github.io/

➡️ Join the NVIDIA Developer Program: https://nvda.ws/3OhiXfl
➡️ Read and subscribe to the NVIDIA Technical Blog: https://nvda.ws/3XHae9F


Edge Computing, Generative AI, Robotics, NVIDIA Jetson, TensorRT, LLMs, NVIDIA

