GCRIS

Browsing by Author "Ucar, Aysegul"

Now showing 1 - 6 of 6
  • Article
    Citation - WoS: 16
    Citation - Scopus: 21
    Advancements in Deep Reinforcement Learning and Inverse Reinforcement Learning for Robotic Manipulation: Toward Trustworthy Interpretable and Explainable Artificial Intelligence
    (IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2024) Ozalp, Recep; Ucar, Aysegul; Guzelis, Cuneyt
    This article presents a literature review of the past five years of studies using Deep Reinforcement Learning (DRL) and Inverse Reinforcement Learning (IRL) in robotic manipulation tasks. The reviewed articles are examined in various categories, including DRL and IRL for perception, assembly, manipulation with uncertain rewards, multitasking, transfer learning, multimodal, and Human-Robot Interaction (HRI). The articles are summarized in terms of the main contributions, methods, challenges, and highlights of the latest and most relevant studies using DRL and IRL for robotic manipulation. Additionally, summary tables regarding the problems and solutions are presented. The literature review then focuses on the concepts of trustworthy AI, interpretable AI, and explainable AI (XAI) in the context of robotic manipulation. Moreover, this review provides a resource for future research on DRL/IRL in trustworthy robotic manipulation.
  • Conference Object
    Citation - Scopus: 6
    Development of Deep Learning Algorithm for Humanoid Robots to Walk to the Target Using Semantic Segmentation and Deep Q Network
    (Institute of Electrical and Electronics Engineers Inc., 2020) Guzelis, Cuneyt; Ucar, Aysegul; Aslan, Simge Nur
  • Conference Object
    Citation - Scopus: 10
    Fast Object Recognition for Humanoid Robots by Using Deep Learning Models with Small Structure
    (Institute of Electrical and Electronics Engineers Inc., 2020) Aslan, Simge Nur; Ucar, Aysegul; Guzelis, Cuneyt; M. Ivanovic, T. Yildirim, G. Trajcevski, C. Badica, L. Bellatreche, I. Kotenko, A. Badica, B. Erkmen, M. Savic
    Nowadays, humanoid robots are expected to help people in healthcare, houses and hotels, industry, military, and other security environments by performing specific tasks, or to replace people in dangerous scenarios. For this purpose, humanoid robots should be able to recognize objects and then carry out the desired tasks. In this study, the aim is for the Robotis-Op3 humanoid robot to recognize differently shaped objects with deep learning methods. First of all, new Convolutional Neural Network (CNN) models with a small structure were proposed. Then, popular deep neural network models that are good at object recognition, such as VGG16 and Residual Network (ResNet), were used for comparison in recognizing the objects. The results were compared in terms of training time, performance, and model complexity. Simulation results show that the new models with a small layer structure produced higher performance than the complex models.
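As a rough illustration of the model-complexity axis compared in this abstract, the sketch below counts the parameters of a hypothetical small CNN stack against the first convolutional layers of a VGG16-style stack. The layer shapes are illustrative assumptions, not the paper's actual models.

```python
# Parameter-count comparison: a hypothetical small CNN vs. a VGG16-style
# prefix. Layer configurations are assumptions for illustration only.

def conv_params(in_ch, out_ch, k=3):
    """Weights plus biases of a k x k convolution layer."""
    return in_ch * out_ch * k * k + out_ch

# Small model: three conv layers with few channels.
small = conv_params(3, 8) + conv_params(8, 16) + conv_params(16, 32)

# First three conv layers of a VGG16-like stack.
vgg_prefix = conv_params(3, 64) + conv_params(64, 64) + conv_params(64, 128)

print(small, vgg_prefix)  # → 6032 112576
```

Even this truncated VGG-style prefix carries over ten times the parameters of the whole small stack, which is the kind of gap that drives the training-time and complexity differences the abstract reports.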
  • Book Part
    Citation - WoS: 1
    Citation - Scopus: 3
    Learning to move an object by the humanoid robots by using deep reinforcement learning
    (IOS Press, 2021) Aslan, Simge Nur; Tasci, Burak; Ucar, Aysegul; Guzelis, Cuneyt
    This paper proposes an algorithm for learning to move a desired object with humanoid robots. In this algorithm, a semantic segmentation algorithm and Deep Reinforcement Learning (DRL) algorithms are combined. The semantic segmentation algorithm is used to detect and recognize the object to be moved. DRL algorithms are used at the walking and grasping steps. A Deep Q Network (DQN) is used to walk towards the target object by means of the previously defined actions of the gait manager and the different head positions of the robot. A Deep Deterministic Policy Gradient (DDPG) network is used for grasping by means of continuous actions. Previously defined commands are finally assigned for the robot to stand up, turn to the left side, and move forward together with the object. In the experimental setup, the Robotis-Op3 humanoid robot is used. The obtained results show that the proposed algorithm works successfully.
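The walking stage described in this abstract selects discrete predefined actions to approach the target. Below is a minimal sketch of that idea using tabular Q-learning on a toy one-dimensional environment; the paper itself trains a DQN on camera input, so the state encoding, action names, and rewards here are simplified stand-ins for illustration only.

```python
import random

ACTIONS = ["forward", "turn_left", "turn_right"]  # predefined gait commands
TARGET = 4  # distance-to-target states 0..4, 4 = target reached

def step(state, action):
    """Hypothetical environment: only 'forward' closes the distance."""
    if action == "forward":
        state = min(state + 1, TARGET)
    reward = 1.0 if state == TARGET else 0.0
    return state, reward

def q_update(q, s, a, r, s2, alpha=0.5, gamma=0.9):
    """Standard tabular Q-learning update."""
    best_next = max(q.get((s2, b), 0.0) for b in ACTIONS)
    old = q.get((s, a), 0.0)
    q[(s, a)] = old + alpha * (r + gamma * best_next - old)

random.seed(0)
q = {}
for _ in range(500):                 # short training run
    s = 0
    for _ in range(20):
        a = random.choice(ACTIONS)   # pure exploration while learning
        s2, r = step(s, a)
        q_update(q, s, a, r, s2)
        s = s2

greedy = max(ACTIONS, key=lambda a: q.get((0, a), 0.0))
print(greedy)  # → forward
```

After training, the greedy policy from the start state picks the distance-reducing action, mirroring how the DQN in the paper steers the robot's gait toward the segmented target before the DDPG grasping stage takes over.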
  • Article
    Citation - WoS: 117
    Citation - Scopus: 160
    Object recognition and detection with deep learning for autonomous driving applications
    (SAGE PUBLICATIONS LTD, 2017) Ucar, Aysegul; Demir, Yakup; Guzelis, Cuneyt
    Autonomous driving requires reliable and accurate detection and recognition of surrounding objects in real drivable environments. Although different object detection algorithms have been proposed, not all are robust enough to detect and recognize occluded or truncated objects. In this paper, we propose a novel hybrid Local Multiple system (LM-CNN-SVM) based on Convolutional Neural Networks (CNNs) and Support Vector Machines (SVMs), due to their powerful feature extraction capability and robust classification property, respectively. In the proposed system, we first divide the whole image into local regions and employ multiple CNNs to learn local object features. Second, we select discriminative features using Principal Component Analysis. We then import them into multiple SVMs, applying both empirical and structural risk minimization instead of using a direct CNN, to increase the generalization ability of the classifier system. Finally, we fuse the SVM outputs. In addition, we use the pre-trained AlexNet and a new CNN architecture. We carry out object recognition and pedestrian detection experiments on the Caltech-101 and Caltech Pedestrian datasets. Comparisons with the best state-of-the-art methods show that the proposed system achieves better results.
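The pipeline this abstract describes (split the image into local regions, reduce features, classify per region, fuse the outputs) can be sketched as below. The CNN feature extractors and SVMs are replaced by simple stand-ins (raw pixels and majority voting), so only the data flow follows the paper, not its actual models.

```python
import numpy as np

def split_into_regions(img, rows=2, cols=2):
    """Divide the image into local regions (stage 1 of the pipeline)."""
    h, w = img.shape[:2]
    return [img[r * h // rows:(r + 1) * h // rows,
                c * w // cols:(c + 1) * w // cols]
            for r in range(rows) for c in range(cols)]

def pca_reduce(feats, k=2):
    """Keep the k principal directions via PCA (stage 2)."""
    centered = feats - feats.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

def fuse_votes(per_region_preds):
    """Majority-vote fusion of per-region classifier outputs (stage 3)."""
    vals, counts = np.unique(per_region_preds, return_counts=True)
    return vals[np.argmax(counts)]

img = np.arange(64, dtype=float).reshape(8, 8)   # toy 8x8 "image"
regions = split_into_regions(img)                # four 4x4 local regions
feats = np.stack([r.ravel() for r in regions])   # per-region feature vectors
reduced = pca_reduce(feats, k=2)                 # 4 regions x 2 components
print(len(regions), reduced.shape, fuse_votes(np.array([1, 0, 1, 1])))
# → 4 (4, 2) 1
```

In the paper, each region's features come from a dedicated CNN and each reduced feature vector feeds its own SVM; the fusion step here stands in for combining those SVM decisions into a single prediction.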
  • Conference Object
    Citation - Scopus: 4
    Semantic Segmentation for Object Detection and Grasping with Humanoid Robots
    (Institute of Electrical and Electronics Engineers Inc., 2020) Guzelis, Cuneyt; Ucar, Aysegul; Aslan, Simge Nur