New CNN and hybrid CNN-LSTM models for learning object manipulation of humanoid robots from demonstration

dc.contributor.author Simge Nur Aslan
dc.contributor.author Recep Ozalp
dc.contributor.author Ayşegül Uçar
dc.contributor.author Cüneyt Güzeliş
dc.date.accessioned 2025-10-06T17:49:57Z
dc.date.issued 2022
dc.description.abstract As the environments in which humans live are complex and uncontrolled, object manipulation with humanoid robots is regarded as one of the most challenging tasks. Learning a manipulation skill from human demonstration (LfD) is one of the popular methods in the artificial intelligence and robotics community. This paper introduces a deep-learning-based teleoperation system for humanoid robots that imitates the human operator’s object manipulation behavior. One of the fundamental problems in LfD is to approximate, with high accuracy, the robot trajectories obtained by means of human demonstrations. The work introduces novel models based on Convolutional Neural Networks (CNNs), hybrid CNN-Long Short-Term Memory (LSTM) models combining the CNN and LSTM models, and their scaled variants for object manipulation with humanoid robots using LfD. In the proposed LfD system, six models are employed to estimate the shoulder roll position of the humanoid robot. The data are first collected through teleoperation of a real Robotis-OP3 humanoid robot, and the models are trained. The trajectory estimation is then carried out by the trained CNN and CNN-LSTM models on the humanoid robot in an autonomous way. All trajectories relating to the joint positions are finally generated from the model outputs. The results of the six models are compared to each other and to the real trajectories in terms of training and validation loss, parameter number, and training and testing time. Extensive experimental results show that the proposed CNN models learn the joint positions well, and that especially the hybrid CNN-LSTM models in the proposed teleoperation system yield more accurate and stable results. © 2022 Elsevier B.V. All rights reserved.
dc.identifier.doi 10.1007/s10586-021-03348-7
dc.identifier.issn 1386-7857
dc.identifier.issn 1573-7543
dc.identifier.uri https://www.scopus.com/inward/record.uri?eid=2-s2.0-85132572804&doi=10.1007%2Fs10586-021-03348-7&partnerID=40&md5=01eacd406a2d554226723492d030136a
dc.identifier.uri https://gcris.yasar.edu.tr/handle/123456789/8699
dc.language.iso English
dc.publisher Springer
dc.relation.ispartof Cluster Computing
dc.source Cluster Computing
dc.subject Convolutional Neural Networks, Humanoid Robots, Learning From Demonstration, Long Short-Term Memory Network, Object Manipulation
dc.subject Anthropomorphic robots, Convolutional neural networks, Deep learning, Demonstrations, Learning systems, Remote control, Social robots, Trajectories, Human demonstrations, Learning objects, Object manipulation, Parameter numbers, Robot trajectory, Teleoperation systems, Training and testing, Trajectory estimation, Long short-term memory
dc.title New CNN and hybrid CNN-LSTM models for learning object manipulation of humanoid robots from demonstration
dc.type Article
dspace.entity.type Publication
gdc.bip.impulseclass C4
gdc.bip.influenceclass C4
gdc.bip.popularityclass C4
gdc.coar.type text::journal::journal article
gdc.collaboration.industrial false
gdc.description.endpage 1590
gdc.description.startpage 1575
gdc.description.volume 25
gdc.identifier.openalex W3175197495
gdc.index.type Scopus
gdc.oaire.diamondjournal false
gdc.oaire.impulse 11.0
gdc.oaire.influence 3.4723369E-9
gdc.oaire.isgreen true
gdc.oaire.popularity 1.4282889E-8
gdc.oaire.publicfunded false
gdc.oaire.sciencefields 0209 industrial biotechnology
gdc.oaire.sciencefields 02 engineering and technology
gdc.openalex.collaboration National
gdc.openalex.fwci 2.4957
gdc.openalex.normalizedpercentile 0.89
gdc.opencitations.count 15
gdc.plumx.crossrefcites 2
gdc.plumx.mendeley 20
gdc.plumx.scopuscites 17
oaire.citation.endPage 1590
oaire.citation.startPage 1575
person.identifier.scopus-author-id Aslan, Simge Nur (57219265872), Ozalp, Recep (57194274546), Uçar, Ayşegül (7004549716), Güzeliş, Cüneyt (55937768800)
project.funder.name This work was funded by the Scientific and Technological Research Council of Turkey (TUBITAK) under grant number 117E589. In addition, the GTX Titan X Pascal GPU used in this research was donated by the NVIDIA Corporation.
publicationissue.issueNumber 3
publicationvolume.volumeNumber 25
relation.isOrgUnitOfPublication ac5ddece-c76d-476d-ab30-e4d3029dee37
relation.isOrgUnitOfPublication.latestForDiscovery ac5ddece-c76d-476d-ab30-e4d3029dee37
