Browsing by Author "Demir, Yakup"
Now showing 1 - 2 of 2
Conference Object | Citation - Scopus: 16
An Implementation of Vision Based Deep Reinforcement Learning for Humanoid Robot Locomotion
(Institute of Electrical and Electronics Engineers Inc., 2019)
Authors: Recep Ozalp; Çağrı Kaymak; Özal Yildirim; Ayşegül Uçar; Yakup Demir; Cüneyt Güzeliş
Editors: P. Koprinkova-Hristova, T. Yildirim, V. Piuri, L. Iliadis, D. Camacho

Deep reinforcement learning (DRL) is a promising approach for controlling humanoid robot locomotion. However, sensor values alone, such as those from IMU gyroscopes and GPS, are not sufficient for robots to learn their locomotion skills. In this article we aim to show the success of vision-based DRL. We propose a new vision-based deep reinforcement learning algorithm for the locomotion of the Robotis-op2 humanoid robot for the first time. In the experimental setup, we construct the locomotion of the humanoid robot in a specific environment in the Webots software. We use Dueling Double Deep Q Networks (D3QN) and Deep Q Networks (DQN), two kinds of deep reinforcement learning algorithms. We present the performance of the vision-based DRL algorithms on a locomotion experiment. The experimental results show that D3QN is better than DQN in terms of stable locomotion and fast training, and that vision-based DRL algorithms can be used successfully in other complex environments and applications. © 2020 Elsevier B.V. All rights reserved.

Article | Citation - WoS: 117 | Citation - Scopus: 160
Object recognition and detection with deep learning for autonomous driving applications
(SAGE Publications Ltd, 2017)
Authors: Aysegul Ucar; Yakup Demir; Cuneyt Guzelis

Autonomous driving requires reliable and accurate detection and recognition of surrounding objects in real drivable environments. Although different object detection algorithms have been proposed, not all are robust enough to detect and recognize occluded or truncated objects.
In this paper we propose a novel hybrid Local Multiple system (LM-CNN-SVM) based on Convolutional Neural Networks (CNNs) and Support Vector Machines (SVMs), owing to their powerful feature extraction capability and robust classification property, respectively. In the proposed system, we first divide the whole image into local regions and employ multiple CNNs to learn local object features. Secondly, we select discriminative features using Principal Component Analysis. We then feed these features into multiple SVMs, which apply both empirical and structural risk minimization, instead of using a direct CNN, to increase the generalization ability of the classifier system. Finally, we fuse the SVM outputs. In addition, we use both the pre-trained AlexNet and a new CNN architecture. We carry out object recognition and pedestrian detection experiments on the Caltech-101 and Caltech Pedestrian datasets. Comparisons with the best state-of-the-art methods show that the proposed system achieves better results.
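As a rough illustration (not the authors' code), the four pipeline stages described in the second abstract (local regions, per-region features, PCA reduction, fused per-region classifiers) can be sketched in NumPy. The histogram "features" and random linear classifiers below are toy stand-ins for the paper's CNN extractors and trained SVMs:

```python
import numpy as np

rng = np.random.default_rng(0)

def split_into_regions(image, grid=2):
    """Stage 1: divide an image into grid x grid local regions."""
    h, w = image.shape[:2]
    rh, rw = h // grid, w // grid
    return [image[r * rh:(r + 1) * rh, c * rw:(c + 1) * rw]
            for r in range(grid) for c in range(grid)]

def toy_region_features(region, dim=16):
    """Toy stand-in for a per-region CNN: an intensity histogram."""
    hist, _ = np.histogram(region, bins=dim, range=(0.0, 1.0))
    return hist.astype(float)

def pca_reduce(features, k=4):
    """Stage 3: keep the k most discriminative components via PCA (SVD)."""
    X = features - features.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:k].T

# A small batch of toy "images" with values in [0, 1).
images = rng.random((16, 32, 32))
feats = np.array([[toy_region_features(r) for r in split_into_regions(img)]
                  for img in images])          # shape: (n_images, n_regions, dim)

# One random linear classifier per region (stand-in for a trained SVM),
# then stage 4: fuse the per-region decisions by majority vote.
votes = []
for r in range(feats.shape[1]):
    Z = pca_reduce(feats[:, r, :])
    w = rng.standard_normal(Z.shape[1])        # hypothetical SVM weight vector
    votes.append((Z @ w > 0).astype(int))
fused = (np.mean(votes, axis=0) >= 0.5).astype(int)
print(fused.shape)
```

With real data, the random weight vectors would be replaced by SVMs fitted per region, but the data flow (split, extract, reduce, classify, fuse) matches the abstract's description.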
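For the first entry, the difference between plain DQN and the Double/Dueling variants that D3QN combines can be shown with a small NumPy sketch. The Q-tables below are toy stand-ins for the paper's vision-based networks, which take camera images as input:

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions, gamma = 5, 3, 0.99

# Tabular stand-ins for the online and target network Q-value outputs.
q_online = rng.random((n_states, n_actions))
q_target = rng.random((n_states, n_actions))

def dueling_q(value, advantage):
    """Dueling head (the extra 'D' in D3QN):
    Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    return value + advantage - advantage.mean()

def dqn_target(reward, next_state):
    """Plain DQN target: the target network both selects
    and evaluates the greedy next action."""
    return reward + gamma * q_target[next_state].max()

def double_dqn_target(reward, next_state):
    """Double DQN target: the online network selects the action,
    the target network evaluates it, reducing overestimation bias."""
    best = int(q_online[next_state].argmax())
    return reward + gamma * q_target[next_state, best]

# The Double DQN target never exceeds the plain DQN target,
# since a max over q_target bounds any single entry of it.
print(dqn_target(1.0, 2) >= double_dqn_target(1.0, 2))
```

This only illustrates the target computations; the paper's contribution is training these estimators from vision input for Robotis-op2 locomotion in Webots.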

