MIT introduces large scale learning for robotic skills

MIT has recently unveiled a groundbreaking method for training robots, inspired by the way large language models (LLMs) like GPT-4 learn from vast amounts of data.

News Arena Network - Massachusetts - UPDATED: November 10, 2024, 05:57 PM - 2 min read

MIT unveils new method for teaching robots using data.



Rather than using traditional, narrowly focused datasets to teach robots new tasks, MIT’s new approach harnesses a broader, more dynamic model that mimics the vast and varied data used to train language models.


Traditional approaches to robot training often rely on imitation learning, in which a robot learns by observing and copying a human performing a task. This method can break down, however, when small changes are introduced.

Variables like lighting, environmental settings, or the presence of new obstacles can throw the robot off, as it lacks enough data to adapt to these challenges. This limitation has often hindered robots from becoming fully versatile and capable of handling a wide range of real-world scenarios.


In response to this challenge, the researchers at MIT turned to the success of large language models, which use massive datasets to train systems capable of handling a variety of tasks.

Drawing inspiration from models like GPT-4, the team sought to bring a similar approach to robotics by using a large-scale, data-driven method. As Lirui Wang, the lead author of the research, explains, the primary difficulty in robotics is the diversity and complexity of the data.

Unlike language models that primarily deal with sentences, robotic data comes from various sources, such as visual inputs, environmental sensors, and physical actions.


To overcome this, MIT introduced a new architecture called "heterogeneous pretrained transformers" (HPT). This model gathers and integrates data from a wide range of sensors and environments.

A transformer, a type of neural network architecture, is then used to compile this data into a usable training model. The key is scale: the larger and more diverse the data input, the better the model performs.
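The core idea described above, separate sensor streams projected into one shared token space that a single transformer can attend over, can be illustrated with a minimal toy sketch. This is not MIT's actual HPT implementation; the modality names, feature dimensions, and single-head attention are all simplifying assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 32  # shared token dimension of the transformer trunk (hypothetical)

# Modality-specific "tokenizers": each projects its raw input into the
# shared token space. Shapes are illustrative, not from the paper.
proj_vision = rng.normal(0, 0.1, (512, D))   # e.g. image features -> tokens
proj_proprio = rng.normal(0, 0.1, (7, D))    # e.g. joint angles -> tokens

def tokenize(x, proj):
    """Project raw modality features into shared D-dimensional tokens."""
    return x @ proj  # (n_tokens, D)

def self_attention(tokens):
    """Single-head scaled dot-product attention over the mixed sequence."""
    scores = tokens @ tokens.T / np.sqrt(D)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ tokens

# One robot "episode": four camera feature vectors plus one joint reading.
vision_tokens = tokenize(rng.normal(size=(4, 512)), proj_vision)
proprio_tokens = tokenize(rng.normal(size=(1, 7)), proj_proprio)

# The shared trunk sees one sequence regardless of where tokens came from,
# which is what lets heterogeneous data be pooled into a single model.
sequence = np.concatenate([vision_tokens, proprio_tokens], axis=0)
out = self_attention(sequence)
print(out.shape)  # (5, 32)
```

The point of the sketch is the pooling step: once every sensor stream is mapped to tokens of the same width, adding a new modality only means adding a new tokenizer, while the trunk stays unchanged.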


The goal of this approach is to create a more adaptable and scalable system. Users would simply need to input the robot’s design, configuration, and the task it is meant to perform.

With enough data and the right model, the robot could learn to execute its tasks effectively, adapting to various changes in its environment with minimal retraining.


David Held, a professor at Carnegie Mellon University and part of the research team, shared the long-term vision behind the project. “Our dream is to have a universal robot brain that you could download and use for your robot without any training at all,” he said.

While the research is still in its early stages, the hope is that by scaling up this approach, the team will achieve a breakthrough in robotic training policies, much like the success seen with large language models in the field of artificial intelligence.


The research has received backing from the Toyota Research Institute (TRI), which has also been a key player in robot training advancements. TRI made headlines last year at TechCrunch Disrupt when it revealed a method for training robots overnight.


More recently, TRI has formed a strategic partnership with Boston Dynamics to combine its robot learning research with the robotics giant’s cutting-edge hardware, paving the way for even more sophisticated robotic systems in the future.
