End-To-End Learning from Demonstration for Object Manipulation of Robotis-Op3 Humanoid Robot

dc.contributor.author Simge Nur Aslan
dc.contributor.author Recep Ozalp
dc.contributor.author Ayşegül Uçar
dc.contributor.author Cüneyt Güzeliş
dc.contributor.author Uçar, Ayşegül
dc.contributor.author Güzeliş, Cüneyt
dc.contributor.author Ozalp, Recep
dc.contributor.author Aslan, Simge Nur
dc.contributor.editor M. Ivanovic , T. Yildirim , G. Trajcevski , C. Badica , L. Bellatreche , I. Kotenko , A. Badica , B. Erkmen , M. Savic
dc.date.accessioned 2025-10-06T17:50:56Z
dc.date.issued 2020
dc.description.abstract Humanoid robots are deployed in settings ranging from homes and hotels to healthcare and industrial environments to help people. Robots can easily be programmed by users to perform predefined tasks such as walking, grasping, standing up, and hand-shaking. However, nowadays robots are expected to learn by themselves from experience gained by observing their environment and the people in it. In this study, the aim is for the Robotis-Op3 humanoid robot to grasp objects through vision-based learning from demonstration. A new algorithm is proposed for this purpose. Firstly, the robot is manipulated by user commands and the raw images from the camera of the Robotis-Op3 are collected. Secondly, a semantic segmentation algorithm is applied to detect and recognize the objects. A new model using Convolutional Neural Networks (CNNs) and Long Short-Term Memory Networks (LSTMs) is then proposed to learn the user demonstrations. The results were compared in terms of training time, performance, and model complexity. Simulation results showed that the new models produced high performance for object manipulation. © 2020 Elsevier B.V. All rights reserved.
dc.description.sponsorship ACKNOWLEDGMENT This work was supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under grant number 117E589. In addition, the GTX Titan X Pascal GPU used in this research was donated by the NVIDIA Corporation.
dc.description.sponsorship TUBITAK, (117E589); Türkiye Bilimsel ve Teknolojik Araştırma Kurumu, TÜBİTAK
dc.identifier.doi 10.1109/INISTA49547.2020.9194630
dc.identifier.isbn 9781728167992
dc.identifier.scopus 2-s2.0-85091972373
dc.identifier.uri https://www.scopus.com/inward/record.uri?eid=2-s2.0-85091972373&doi=10.1109%2FINISTA49547.2020.9194630&partnerID=40&md5=4a6875ab88a03e321e45fca288ee4de8
dc.identifier.uri https://gcris.yasar.edu.tr/handle/123456789/9170
dc.identifier.uri https://doi.org/10.1109/INISTA49547.2020.9194630
dc.language.iso English
dc.publisher Institute of Electrical and Electronics Engineers Inc.
dc.relation.ispartof 2020 International Conference on INnovations in Intelligent SysTems and Applications, INISTA 2020
dc.rights info:eu-repo/semantics/closedAccess
dc.subject Convolutional Neural Networks, Humanoid Robots, Long Short-Term Memory Networks, Object Grasping, Semantic Segmentation, Image Segmentation, Intelligent Systems, Object Detection, Semantics, Industry Environment, Learning from Demonstration, Model Complexity, Object Manipulation, Short-Term Memory, User Commands, Anthropomorphic Robots
dc.subject Convolutional Neural Networks
dc.subject Humanoid Robots
dc.subject Semantic Segmentation
dc.subject Long Short-Term Memory Networks
dc.subject Object Grasping
dc.title End-To-End Learning from Demonstration for Object Manipulation of Robotis-Op3 Humanoid Robot
dc.type Conference Object
dspace.entity.type Publication
gdc.author.scopusid 57194274546
gdc.author.scopusid 57219265872
gdc.author.scopusid 55937768800
gdc.author.scopusid 7004549716
gdc.bip.impulseclass C5
gdc.bip.influenceclass C5
gdc.bip.popularityclass C5
gdc.coar.type text::conference output
gdc.collaboration.industrial false
gdc.description.department
gdc.description.departmenttemp [Aslan S.N.] Firat University, Department of Mechatronics Engineering, Elazig, Turkey; [Ozalp R.] Firat University, Department of Mechatronics Engineering, Elazig, Turkey; [Uçar A.] Firat University, Department of Mechatronics Engineering, Elazig, Turkey; [Güzeliş C.] Yaşar University, Department of Electrical and Electronics Engineering, Izmir, Turkey
gdc.description.endpage 6
gdc.description.publicationcategory Konferans Öğesi - Uluslararası - Kurum Öğretim Elemanı
gdc.description.startpage 1
gdc.identifier.openalex W3086839181
gdc.index.type Scopus
gdc.oaire.diamondjournal false
gdc.oaire.impulse 1.0
gdc.oaire.influence 2.4458473E-9
gdc.oaire.isgreen false
gdc.oaire.popularity 2.7786402E-9
gdc.oaire.publicfunded false
gdc.oaire.sciencefields 0202 electrical engineering, electronic engineering, information engineering
gdc.oaire.sciencefields 02 engineering and technology
gdc.openalex.collaboration National
gdc.openalex.fwci 0.2931
gdc.openalex.normalizedpercentile 0.57
gdc.opencitations.count 2
gdc.plumx.crossrefcites 1
gdc.plumx.mendeley 15
gdc.plumx.scopuscites 6
gdc.scopus.citedcount 6
gdc.virtual.author Güzeliş, Cüneyt
person.identifier.scopus-author-id Aslan, Simge Nur (57219265872); Ozalp, Recep (57194274546); Uçar, Ayşegül (7004549716); Güzeliş, Cüneyt (55937768800)
project.funder.name Scientific and Technological Research Council of Turkey (TUBITAK)
relation.isAuthorOfPublication 10f564e3-6c1c-4354-9ce3-b5ac01e39680
relation.isAuthorOfPublication.latestForDiscovery 10f564e3-6c1c-4354-9ce3-b5ac01e39680
relation.isOrgUnitOfPublication ac5ddece-c76d-476d-ab30-e4d3029dee37
relation.isOrgUnitOfPublication.latestForDiscovery ac5ddece-c76d-476d-ab30-e4d3029dee37
