•  Free pickup at your Club store
•  7,000,000 titles in our catalogue
•  Pay securely
•  Always a store near you

Learning-From-Observation 2.0

Automatic Acquisition of Robot Behavior from Human Demonstration

Katsushi Ikeuchi, Naoki Wake, Jun Takamatsu, Kazuhiro Sasabuchi
Hardcover | English | Synthesis Lectures on Computer Vision
68,95 €
+ 137 points
Pre-order, availability date unknown
Order in one click
Pay securely
Delivery in Belgium: €3.99
Free in-store pickup

Description

This book presents recent breakthroughs in the field of Learning-from-Observation (LfO) made possible by advances in large language models (LLMs) and reinforcement learning (RL), and positions them in the context of the area's historical development. LfO involves observing human behaviors and generating robot actions that mimic them. While LfO may appear similar, on the surface, to Imitation Learning (IL) in the machine learning community and Programming-by-Demonstration (PbD) in the robotics community, a significant difference is that those methods directly imitate human hand movements, whereas LfO encodes human behaviors into abstract representations and then maps these representations onto the currently available hardware (individual body) of the robot, imitating them only indirectly. This indirect imitation absorbs changes in the surrounding environment and differences in robot hardware. In addition, the abstract representation acts as a filter, distinguishing important from unimportant aspects of human behavior and thereby enabling imitation from fewer and less demanding demonstrations.

The authors have been researching the LfO paradigm for the past decade or so. Previously, the focus was primarily on designing necessary and sufficient task representations to define specific task domains, such as the assembly of machine parts, knot-tying, and human dance movements. Recent advances in Generative Pre-trained Transformers (GPT) and RL have led to groundbreaking methods for obtaining and mapping these abstract representations: by utilizing GPT, the authors can automatically generate abstract representations from videos, and by employing RL-trained agent libraries, implementing the corresponding robot actions becomes far more feasible.
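The pipeline described above (encode an observed demonstration into abstract task steps, filter out unimportant details, then map each step onto the skills the robot's body actually provides) can be sketched in Python. This is a minimal illustration under assumed interfaces; `TaskStep`, `encode_demonstration`, `SKILL_LIBRARY`, and `map_to_robot` are hypothetical names invented for this sketch, not the book's actual API.

```python
from dataclasses import dataclass

# Hypothetical abstract task step: what was done and to which object,
# independent of the demonstrator's exact hand trajectory.
@dataclass(frozen=True)
class TaskStep:
    action: str   # abstract verb, e.g. "grasp", "insert"
    target: str   # object the action applies to

def encode_demonstration(observed_events):
    """Encode raw observation events into abstract task steps,
    filtering out events flagged as unimportant."""
    return [TaskStep(e["action"], e["target"])
            for e in observed_events if e.get("important", True)]

# Stand-in for an RL-trained skill library: abstract verb -> robot routine.
SKILL_LIBRARY = {
    "grasp":  lambda target: f"robot.grasp({target})",
    "insert": lambda target: f"robot.insert({target})",
}

def map_to_robot(steps):
    """Map abstract steps onto whatever skills this robot body provides;
    steps with no matching skill on this hardware are skipped."""
    return [SKILL_LIBRARY[s.action](s.target) for s in steps
            if s.action in SKILL_LIBRARY]

demo = [
    {"action": "grasp",  "target": "peg"},
    {"action": "wiggle", "target": "wrist", "important": False},  # filtered out
    {"action": "insert", "target": "peg"},
]
program = map_to_robot(encode_demonstration(demo))
# program is a robot-specific action sequence for the abstract task
```

Because the mapping goes through the abstract representation, the same demonstration can be replayed on a different robot simply by swapping in that robot's skill library.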

Specifications

Contributors

Author(s):
Publisher:

Contents

Number of pages:
140
Language:
English
Series:

Characteristics

EAN:
9783032034441
Publication date:
04-10-25
Format:
Hardcover
Binding:
Sewn
Dimensions:
168 mm x 240 mm
