October 30, 2024, Cambridge, MA – Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and Meta have developed a multiskill robot training method inspired by large language models (LLMs) such as GPT-4.
The approach, demonstrated through a robotic arm feeding task, introduces a multiskill training model designed to improve adaptability, draw on varied data sources, and increase training efficiency in robotics.
The technique, called Heterogeneous Pretrained Transformers (HPT), pools different types of inputs, such as sensor data and visual information, and encodes them into a single shared model.
Drawing on ideas from generative AI, the method produced results more than 20 percent better than conventional training approaches.
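To make the idea concrete, here is a minimal sketch of how heterogeneous inputs might be encoded into one shared transformer, in the spirit described above. It is not the released HPT code: the module names, feature dimensions, and pooling choice are assumptions for illustration only.

```python
# Illustrative sketch only: dimensions and module names are assumptions,
# not the HPT release. It shows the general pattern of mapping
# heterogeneous inputs (vision + proprioceptive sensors) into a shared
# transformer trunk that predicts robot actions.
import torch
import torch.nn as nn

class HeterogeneousPolicy(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_layers=4,
                 proprio_dim=7, action_dim=7):
        super().__init__()
        # Modality-specific "stem" encoders project each input type
        # into a common token space of width d_model.
        self.vision_stem = nn.Sequential(   # expects precomputed image features
            nn.Linear(512, d_model), nn.GELU())
        self.proprio_stem = nn.Sequential(  # expects joint states
            nn.Linear(proprio_dim, d_model), nn.GELU())
        # Shared transformer trunk processes tokens from all modalities.
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Task-specific head decodes the trunk output into robot actions.
        self.action_head = nn.Linear(d_model, action_dim)

    def forward(self, image_feats, proprio):
        # image_feats: (B, T_img, 512) visual feature tokens
        # proprio:     (B, 1, proprio_dim) joint positions/velocities
        tokens = torch.cat(
            [self.vision_stem(image_feats), self.proprio_stem(proprio)],
            dim=1)
        latent = self.trunk(tokens)
        # Pool all tokens and predict the next action.
        return self.action_head(latent.mean(dim=1))

# Example forward pass with dummy data.
policy = HeterogeneousPolicy()
actions = policy(torch.randn(2, 16, 512), torch.randn(2, 1, 7))
print(actions.shape)  # torch.Size([2, 7])
```

In this sketch the shared trunk is what would be pretrained across many robots and tasks, while the lightweight stems and head can be swapped per embodiment.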
Dr. Nivash Jeevanandam, an MIT CSAIL researcher, stated, “Our unified approach significantly reduces the time and resources required for training.”
The training dataset contains more than 200,000 robotic trajectories drawn from human demonstrations and simulations, with the aim of making it easier for robots to adapt to complicated tasks.
The work, supported by the Amazon Greater Boston Tech Initiative and the Toyota Research Institute, marks a shift in AI development for robotics, aiming for learning that is more cost-effective, efficient, and adaptable. The researchers have now begun exploring other applications to broaden the range of tasks robots can perform with AI.