Continual Robot Learning from Humans
Workshop RSS 2025 - June 21 (8:50am - 12:30pm in Los Angeles, US)
In-person location: OHE 132, University of Southern California
Zoom link: currently unavailable; we will upload the recording and slides afterward
Time (PST) | Session |
---|---|
8:50 am - 9:00 am | Organizers Introductory Remarks |
9:00 am - 9:30 am | Yonatan Bisk Humans Know Things, Humans Get in the Way Abstract
As language models become increasingly pervasive, how do they handle embodied contexts? How well are they calibrated? What's next?!
|
9:30 am - 10:00 am | Mahi Shafiullah Generalization as a Matter of Perspective Abstract
While we are seeing increasingly impressive robot demos and highly capable embodiments come out, we aren't quite at the point where we can have a robot butler in every home.
Why is that? In this talk, I will argue that to make advances on this problem we will have to distinguish between interpolation and extrapolation problems in learning.
Then, I will talk about how we can cast the problem of robot learning from humans as an interpolation problem with a change of perspective from humans to robots - quite literally - by using handheld tools and an iPhone.
I will talk about different approaches that unlock solving problems in novel environments right out of the box following this principle of solving robot problems from the robot perspective.
Finally, I will talk about some future challenges that we will have to address, such as dexterity and long-horizon tasks, and some solutions we have worked on.
|
10:00 am - 10:30 am | Homanga Bharadhwaj Observational Learning for Manipulation via Visual Imitation of Humans Abstract
Robots learning to interact with the world by directly observing humans go about their daily lives has been a dream in AI for decades.
In this talk, I will discuss recent advances in learning such robotic manipulation zero-shot primarily from large-scale web videos, and also enabling robots to follow in-context human demonstrations one-shot in novel scenarios.
Finally, I'll conclude with a reality check about how far we are from the dream, and discuss open challenges.
|
10:30 am - 11:00 am | Break and Poster Session |
11:00 am - 11:30 am | Yilun Du Continual Robot Learning with Compositional Generative Models Abstract
Robot learning policies are typically large, monolithic models that are difficult to adapt.
In this talk, I'll illustrate how, by building policies in a compositional manner, we can construct systems that quickly and continually learn from people.
I'll first illustrate how we can use inference-time composition to flexibly adapt policies to human constraints.
I'll next illustrate how we can use compositional models with language models to quickly learn to set up scenes together.
Finally, I'll further illustrate how compositional generative models enable us to construct an inverse generative modeling procedure which allows us to flexibly learn new tasks.
|
11:30 am - 12:00 pm | Chelsea Finn You Should Talk to Your Robot Abstract
To be determined.
|
12:00 pm - 12:30 pm | Maja Mataric HRI to AGI: How Embodied Interaction and Learning Shape Intelligence and Drive the Future of AGI Abstract
To be determined.
|
Event | Date |
---|---|
Submission Deadline | |
Notification | June 7th, 2025 (23:59 AoE) |
Camera Ready | June 14th, 2025 (23:59 AoE) |
Workshop | June 21st (8:50am - 12:30pm), 2025 |
Submissions should follow the RSS template, which is available either in LaTeX or Word format. The recommended paper length is 4 pages excluding references. However, any paper that is between 2 and 6 pages, excluding references, will be reviewed for inclusion in the workshop program. All papers must be submitted as anonymized PDFs for double-blind review via OpenReview.
Accepted papers will be presented during the workshop (either in-person or remotely) and featured in a poster session. The proceedings will be treated as non-archival, allowing future conference or journal submissions.
Relevant topics include, but are not limited to: