GCRIS

Browsing by Author "Demir, Yakup"

Now showing 1 - 2 of 2
    Conference Object
    Citation - Scopus: 16
    An Implementation of Vision Based Deep Reinforcement Learning for Humanoid Robot Locomotion
    (Institute of Electrical and Electronics Engineers Inc., 2019) Recep Ozalp; Çağrı Kaymak; Özal Yıldırım; Ayşegül Uçar; Yakup Demir; Cüneyt Güzeliş; Eds.: P. Koprinkova-Hristova, T. Yildirim, V. Piuri, L. Iliadis, D. Camacho
    Deep reinforcement learning (DRL) is a promising approach for controlling humanoid robot locomotion. However, sensor readings alone, such as IMU, gyroscope, and GPS values, are not sufficient for robots to learn locomotion skills. In this article we aim to show the success of vision-based DRL. We propose a new vision-based deep reinforcement learning algorithm for the locomotion of the Robotis-op2 humanoid robot for the first time. In the experimental setup we construct the locomotion of the humanoid robot in a specific environment in the Webots software. We use Double Dueling Deep Q Networks (D3QN) and Deep Q Networks (DQN), two deep reinforcement learning algorithms. We present the performance of the vision-based DRL algorithms on a locomotion experiment. The experimental results show that D3QN outperforms DQN in terms of stable locomotion and fast training, and that vision-based DRL algorithms can be successfully applied to other complex environments and applications.
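
The D3QN named in this abstract combines two ingredients: a dueling Q-network (separate state-value and advantage streams) and double-DQN target estimation (the online network selects the next action, the target network evaluates it). Below is a minimal PyTorch sketch of both, assuming stacked grayscale camera frames as input; the layer sizes, frame shape, and function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal D3QN sketch: dueling architecture + double-DQN targets (PyTorch).
# All sizes and names are assumptions for illustration only.
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, n_actions: int):
        super().__init__()
        # Small convolutional encoder for 4 stacked 84x84 grayscale camera frames.
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat = 64 * 7 * 7  # encoder output size for 84x84 inputs
        # Dueling architecture: separate state-value and advantage streams.
        self.value = nn.Sequential(nn.Linear(feat, 512), nn.ReLU(), nn.Linear(512, 1))
        self.advantage = nn.Sequential(nn.Linear(feat, 512), nn.ReLU(), nn.Linear(512, n_actions))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.encoder(x)
        v, a = self.value(h), self.advantage(h)
        # Subtracting the mean advantage keeps the Q-value decomposition identifiable.
        return v + a - a.mean(dim=1, keepdim=True)

def d3qn_target(online: DuelingQNet, target: DuelingQNet,
                reward: torch.Tensor, next_obs: torch.Tensor,
                done: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
    """Double-DQN target: online net picks the action, target net evaluates it."""
    with torch.no_grad():
        best_action = online(next_obs).argmax(dim=1, keepdim=True)
        next_q = target(next_obs).gather(1, best_action).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q
```

Decoupling action selection (online net) from action evaluation (target net) is what distinguishes double DQN from vanilla DQN, and the value/advantage split is the dueling component; together they give D3QN.
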
    Article
    Citation - WoS: 117
    Citation - Scopus: 160
    Object recognition and detection with deep learning for autonomous driving applications
    (SAGE Publications Ltd, 2017) Aysegul Ucar; Yakup Demir; Cuneyt Guzelis
    Autonomous driving requires reliable and accurate detection and recognition of surrounding objects in real drivable environments. Although different object detection algorithms have been proposed, not all are robust enough to detect and recognize occluded or truncated objects. In this paper we propose a novel hybrid Local Multiple system (LM-CNN-SVM) based on Convolutional Neural Networks (CNNs) and Support Vector Machines (SVMs), chosen for their powerful feature extraction capability and robust classification property, respectively. In the proposed system we first divide the whole image into local regions and employ multiple CNNs to learn local object features. Secondly, we select discriminative features using Principal Component Analysis. We then import these features into multiple SVMs, which apply both empirical and structural risk minimization, instead of using a direct CNN, to increase the generalization ability of the classifier system. Finally, we fuse the SVM outputs. In addition, we use the pre-trained AlexNet and a new CNN architecture. We carry out object recognition and pedestrian detection experiments on the Caltech-101 and Caltech Pedestrian datasets. Comparisons with the best state-of-the-art methods show that the proposed system achieves better results.
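
As a reading aid, here is a minimal Python sketch of the pipeline this abstract describes: local regions, per-region CNN features, PCA, per-region SVMs, and a fused decision, using scikit-learn. The region grid, the flatten-based stand-in for the CNN feature extractor, and all function names are assumptions for illustration, not the paper's code.

```python
# Minimal LM-CNN-SVM-style pipeline sketch: regions -> features -> PCA -> SVMs -> fusion.
# Illustrative only; the feature extractor and region grid are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def split_regions(image: np.ndarray, grid: int = 2):
    """Split an HxWxC image into a grid x grid list of local regions."""
    h, w = image.shape[0] // grid, image.shape[1] // grid
    return [image[i*h:(i+1)*h, j*w:(j+1)*w] for i in range(grid) for j in range(grid)]

def cnn_features(region: np.ndarray) -> np.ndarray:
    # Placeholder for a CNN feature extractor (the paper uses AlexNet and a
    # custom CNN); here we simply flatten the pixels so the sketch runs.
    return region.ravel().astype(np.float32)

def fit_lm_cnn_svm(images, labels, grid: int = 2, n_components: int = 64):
    """Train one PCA + SVM per local region; returns the fitted models."""
    models = []
    for r in range(grid * grid):
        X = np.stack([cnn_features(split_regions(img, grid)[r]) for img in images])
        pca = PCA(n_components=min(n_components, X.shape[0], X.shape[1])).fit(X)
        svm = SVC(probability=True).fit(pca.transform(X), labels)
        models.append((pca, svm))
    return models

def predict_lm_cnn_svm(models, image, grid: int = 2):
    """Fuse per-region SVM posteriors by averaging, then take the argmax class."""
    probs = [svm.predict_proba(pca.transform(cnn_features(region)[None, :]))
             for (pca, svm), region in zip(models, split_regions(image, grid))]
    return np.mean(probs, axis=0).argmax(axis=1)
```

Averaging the per-region posteriors is one simple fusion rule; the abstract does not pin down the combiner, so that choice is an assumption here.
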