GCRIS
Browsing by Author "Ozalp, Recep"

Now showing 1 - 2 of 2
    Article
    Citation - WoS: 16
    Citation - Scopus: 21
Advancements in Deep Reinforcement Learning and Inverse Reinforcement Learning for Robotic Manipulation: Toward Trustworthy, Interpretable, and Explainable Artificial Intelligence
(IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2024) Recep Ozalp; Aysegul Ucar; Cuneyt Guzelis
This article presents a literature review of the past five years of studies using Deep Reinforcement Learning (DRL) and Inverse Reinforcement Learning (IRL) in robotic manipulation tasks. The reviewed articles are examined in various categories, including DRL and IRL for perception, assembly, manipulation with uncertain rewards, multitasking, transfer learning, multimodal learning, and Human-Robot Interaction (HRI). The articles are summarized in terms of the main contributions, methods, challenges, and highlights of the latest and most relevant studies using DRL and IRL for robotic manipulation. Additionally, summary tables regarding the problems and solutions are presented. The literature review then focuses on the concepts of trustworthy AI, interpretable AI, and explainable AI (XAI) in the context of robotic manipulation. Moreover, this review provides a resource for future research on DRL/IRL in trustworthy robotic manipulation.
    Conference Object
    Citation - Scopus: 6
End-To-End Learning from Demonstration for Object Manipulation of Robotis-Op3 Humanoid Robot
(Institute of Electrical and Electronics Engineers Inc., 2020) Simge Nur Aslan; Recep Ozalp; Ayşegül Uçar; Cüneyt Güzeliş
Humanoid robots are deployed in environments ranging from houses and hotels to healthcare and industrial settings to help people. Robots can easily be programmed by users for predefined tasks such as walking, grasping, standing up, and shaking hands. Nowadays, however, robots are expected to learn by themselves from experience obtained by watching the environment and the people in it. This study aims for the Robotis-Op3 humanoid robot to grasp objects by learning from vision-based demonstrations, and a new algorithm is proposed for this purpose. Firstly, the robot is manipulated via user commands, and raw images from the camera of Robotis-Op3 are collected. Secondly, a semantic segmentation algorithm is applied to detect and recognize the objects. A new model using Convolutional Neural Networks (CNNs) and Long Short-Term Memory networks (LSTMs) is then proposed to learn the user demonstrations. The results were compared in terms of training time, performance, and model complexity. Simulation results showed that the new models produced high performance for object manipulation. © 2020 Elsevier B.V. All rights reserved.
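The abstract's pipeline (per-frame CNN features fed into an LSTM that maps a demonstration sequence to robot commands) can be sketched roughly as follows. This is a hypothetical illustration, not the authors' model: the layer sizes, joint count, and class names are assumptions, and PyTorch is used for brevity.

```python
import torch
import torch.nn as nn

class DemoLearner(nn.Module):
    """Hypothetical CNN+LSTM sketch: encode each camera frame with a small
    CNN, model the demonstration sequence with an LSTM, and predict a joint
    command per frame. All sizes are illustrative, not from the paper."""

    def __init__(self, n_joints=20, feat_dim=64):
        super().__init__()
        self.cnn = nn.Sequential(                       # per-frame encoder
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, n_joints)       # joint commands

    def forward(self, frames):
        # frames: (batch, time, 3, H, W) demonstration clips
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)                       # temporal context
        return self.head(out)                           # (batch, time, n_joints)

model = DemoLearner()
demo = torch.zeros(2, 5, 3, 64, 64)   # 2 dummy demos, 5 frames each
actions = model(demo)
print(actions.shape)                  # torch.Size([2, 5, 20])
```

In a real setup the targets would come from the recorded user commands, with the segmentation masks (mentioned in the abstract) optionally stacked onto the input channels.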