Humans are really good at moving in space. We do motor planning and especially error correction much better than existing robot control systems. 

Could a transformer model trained on a large library of annotated human movement data serve as a controller for a humanoid robot or robotic limb? My impression is that a movement model might not be useful for directly controlling servos, because human and robot bodies are so different. But perhaps it could improve the motor planning and error correction layer?
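To make the proposed split concrete, here's a toy sketch of the two layers I have in mind (everything here is hypothetical, and Python is just for illustration): a learned movement model proposes the next waypoint, while a conventional low-level loop, standing in for whatever servo controller the robot already has, tracks it.

```python
import numpy as np


def plan_next_waypoint(state: np.ndarray, goal: str) -> np.ndarray:
    """Hypothetical planning/error-correction layer: a trained movement
    model would go here, mapping the current joint state and a text goal
    to the next desired joint configuration. The body is a placeholder."""
    target = np.ones_like(state)  # stand-in for a model-chosen target
    return state + 0.1 * (target - state)


def servo_command(state: np.ndarray, waypoint: np.ndarray, kp: float = 2.0) -> np.ndarray:
    """Conventional low-level control (a bare proportional term here),
    standing in for the robot's existing servo loop."""
    return kp * (waypoint - state)


state = np.zeros(7)  # e.g. joint angles of a 7-DoF arm
for _ in range(100):
    waypoint = plan_next_waypoint(state, "pour water from this cup to that cup")
    command = servo_command(state, waypoint)
    state = state + 0.01 * command  # crude stand-in for robot dynamics
```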

The data would presumably take the form of a 3D wireframe model of the limb/body and its trajectory through space, the goal of the movement ("pour water from this cup to that cup"), and some rating of success or failure.
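For instance, one training record might look something like this (a minimal sketch; every field name here is my own invention, not an existing dataset format):

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class MovementExample:
    trajectory: np.ndarray           # joint positions over time: (timesteps, joints, xyz), in metres
    skeleton: list[tuple[int, int]]  # wireframe topology as (parent, child) joint-index pairs
    goal: str                        # natural-language goal for the movement
    success: float                   # rating from 0.0 (failed) to 1.0 (succeeded)


example = MovementExample(
    trajectory=np.zeros((240, 22, 3)),  # 240 frames of a 22-joint body
    skeleton=[(0, 1), (1, 2)],          # truncated for brevity
    goal="pour water from this cup to that cup",
    success=1.0,
)
```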

I don't have experience in either LLMs/transformer models or robotics so this question might miss some obvious points, but I couldn't get the idea out of my head!

3 Answers

gwern · Apr 24, 2023

This seems to be akin to asking, 'does RL scale like every other area of DL so far?', and the answer is more or less yes:

https://www.reddit.com/r/mlscaling/search?q=flair%3ARL&restrict_sr=on&include_over_18=on
https://gwern.net/doc/reinforcement-learning/scaling/index

A_Posthuman · Apr 24, 2023

Yes, there appears to already be work in this area. Here is a recent example I ran across on Twitter, showing videos of two relatively low-cost robot arms learning very fine manipulation tasks after apparently just 15 minutes or so of demonstrations:

Introducing ACT: Action Chunking with Transformers
https://twitter.com/tonyzzhao/status/1640395685597159425

Related website: Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware
https://tonyzhaozh.github.io/aloha/
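
For anyone unfamiliar with the term, here is a toy sketch of the core "action chunking" idea: instead of predicting one action per observation, the policy predicts a short sequence (a "chunk") of future actions at once and executes it before re-planning. To be clear, this is only my illustration of the concept; the actual ACT model is a transformer trained as a conditional VAE on camera images and joint positions, not the small MLP below.

```python
import torch
import torch.nn as nn


class ChunkedPolicy(nn.Module):
    """Toy illustration of action chunking: map one observation to a
    chunk of the next `chunk_size` actions instead of a single action.
    (Not the ACT architecture itself.)"""

    def __init__(self, obs_dim: int, act_dim: int, chunk_size: int = 100):
        super().__init__()
        self.chunk_size = chunk_size
        self.act_dim = act_dim
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256),
            nn.ReLU(),
            nn.Linear(256, chunk_size * act_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (batch, obs_dim) -> actions: (batch, chunk_size, act_dim)
        return self.net(obs).view(-1, self.chunk_size, self.act_dim)


policy = ChunkedPolicy(obs_dim=14, act_dim=14, chunk_size=100)
obs = torch.randn(1, 14)
action_chunk = policy(obs)  # (1, 100, 14): execute these actions open-loop,
print(action_chunk.shape)   # then re-plan from a fresh observation
```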

On the other hand, data (and data efficiency) is still a problem. This is not the sort of thing that gets us beyond the sim-to-real training paradigm, so while it might see use, it's not a big deal.