GCRIS
Browsing by Author "Aslan, Simge Nur"

Now showing 1 - 8 of 8
  • Article
    Citation - Scopus: 3
    Development of a deep wavelet pyramid scene parsing semantic segmentation network for scene perception in indoor environments
    (Springer Science and Business Media Deutschland GmbH, 2023) Aslan, Simge Nur; Uçar, Ayşegül; Güzeliş, Cüneyt
    In this paper, a new Deep Wavelet Pyramid Scene Parsing Network (DW-PSPNet) is proposed as an effective combination of a Discrete Wavelet Transform (DWT) inception module, channel and spatial attention modules, and PSPNet. To the best of our knowledge, improved semantic segmentation via this combination has not yet been reported in the literature. The paper has two main contributions: (1) a new backbone network for PSPNet, built from a combination of DWT inception modules and attention mechanisms; (2) a new and improved version of the PSPNet base structure. Further, three new modifications are introduced. First, the drop activation function is used to increase the validation and test accuracy of the segmentation. Second, a skip connection from the backbone is applied to increase validation and test accuracies by restoring the resolution of feature maps via full utilization of multilevel semantic features. Third, the Inverse Wavelet Transform (IWT) and a convolution layer are applied to obtain the segmented images without information loss. DW-PSPNet was evaluated both on our own data, generated with a Robotis-Op3 humanoid robot to detect objects in indoor environments, and on a benchmark dataset. Simulation results show higher performance of the proposed network compared with previous successful networks on semantic segmentation tasks in indoor environments. Moreover, extensive experiments on the benchmark ADE20K dataset were also conducted. DW-PSPNet achieved an mIoU score of 45.97% on the ADE20K validation set, which is a new state-of-the-art result.
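    The lossless downsampling idea behind the DWT/IWT modules described in this abstract can be sketched in plain NumPy. This is an illustrative single-level 2D Haar transform pair, not the authors' implementation; the function names and the choice of the Haar basis are assumptions.

    ```python
    import numpy as np

    def haar_dwt2(x):
        """Single-level 2D Haar DWT: splits an (H, W) map into four
        half-resolution sub-bands (LL, LH, HL, HH). Stacking the sub-bands
        as channels halves spatial resolution without discarding information,
        which is how a DWT layer can replace strided pooling in a backbone."""
        a = x[0::2, 0::2]; b = x[0::2, 1::2]
        c = x[1::2, 0::2]; d = x[1::2, 1::2]
        ll = (a + b + c + d) / 2.0   # low-frequency approximation
        lh = (a - b + c - d) / 2.0   # horizontal detail
        hl = (a + b - c - d) / 2.0   # vertical detail
        hh = (a - b - c + d) / 2.0   # diagonal detail
        return ll, lh, hl, hh

    def haar_iwt2(ll, lh, hl, hh):
        """Inverse transform: recovers the original map exactly, mirroring
        the abstract's use of the IWT to restore resolution without loss."""
        h, w = ll.shape
        x = np.empty((2 * h, 2 * w))
        x[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
        x[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
        x[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
        x[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
        return x
    ```

    Because the transform is orthogonal, `haar_iwt2(*haar_dwt2(x))` reproduces `x` exactly, unlike max pooling, which discards three quarters of the activations.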
  • Conference Object
    Citation - Scopus: 6
    Development of Deep Learning Algorithm for Humanoid Robots to Walk to the Target Using Semantic Segmentation and Deep Q Network
    (Institute of Electrical and Electronics Engineers Inc., 2020) Guzelis, Cuneyt; Ucar, Aysegul; Aslan, Simge Nur
  • Conference Object
    Citation - Scopus: 6
    End-To-End Learning from Demonstration for Object Manipulation of Robotis-Op3 Humanoid Robot
    (Institute of Electrical and Electronics Engineers Inc., 2020) Aslan, Simge Nur; Ozalp, Recep; Uçar, Ayşegül; Güzeliş, Cüneyt
    Humanoid robots are deployed in environments ranging from houses and hotels to healthcare and industry to help people. Robots can easily be programmed by users for predefined tasks such as walking, grasping, standing up, and shaking hands. Nowadays, however, robots are expected to learn by themselves from experience, by watching the environment and the people in it. In this study, the aim is for the Robotis-Op3 humanoid robot to grasp objects by learning from vision-based demonstrations. A new algorithm is proposed for this purpose. Firstly, the robot is manipulated through user commands and the raw images from the camera of Robotis-Op3 are collected. Secondly, a semantic segmentation algorithm is applied to detect and recognize the objects. A new model using Convolutional Neural Networks (CNNs) and Long Short-Term Memory networks (LSTMs) is then proposed to learn the user demonstrations. The results were compared in terms of training time, performance, and model complexity. Simulation results showed that the new models produced high performance for object manipulation.
  • Conference Object
    Citation - Scopus: 10
    Fast Object Recognition for Humanoid Robots by Using Deep Learning Models with Small Structure
    (Institute of Electrical and Electronics Engineers Inc., 2020) Aslan, Simge Nur; Uçar, Ayşegül; Güzeliş, Cüneyt
    Nowadays, humanoid robots are expected to help people in healthcare, houses, hotels, industry, military, and other security environments by performing specific tasks, or to replace people in dangerous scenarios. For this purpose, humanoid robots should be able to recognize objects and then carry out the desired tasks. In this study, the aim is for the Robotis-Op3 humanoid robot to recognize differently shaped objects with deep learning methods. First of all, new models with a small Convolutional Neural Network (CNN) structure were proposed. Then, popular deep neural network models that are good at object recognition, such as VGG16 and Residual Network (ResNet), were used for comparison in recognizing the objects. The results were compared in terms of training time, performance, and model complexity. Simulation results show that the new models with a small layer structure produced higher performance than the complex models.
  • Book Part
    Citation - WoS: 1
    Citation - Scopus: 3
    Learning to move an object by the humanoid robots by using deep reinforcement learning
    (IOS Press, 2021) Aslan, Simge Nur; Taşçı, Burak; Uçar, Ayşegül; Güzeliş, Cüneyt
    This paper proposes an algorithm that enables humanoid robots to learn to move a desired object. The algorithm combines a semantic segmentation algorithm with Deep Reinforcement Learning (DRL) algorithms. The semantic segmentation algorithm is used to detect and recognize the object to be moved. DRL algorithms are used in the walking and grasping steps. A Deep Q Network (DQN) is used to walk towards the target object by means of previously defined actions in the gait manager and the different head positions of the robot. A Deep Deterministic Policy Gradient (DDPG) network is used for grasping by means of continuous actions. Previously defined commands are finally assigned for the robot to stand up, turn left, and move forward together with the object. In the experimental setup, the Robotis-Op3 humanoid robot is used. The obtained results show that the proposed algorithm works successfully.
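    The Bellman update that the DQN in this abstract generalizes with a neural network can be illustrated with a toy tabular sketch. The 1-D "walk to the target" environment, its reward, and all hyperparameters below are invented for illustration; they are not the paper's setup.

    ```python
    import numpy as np

    # Toy 1-D world: states 0..4, target at state 4.
    # Actions: 0 = step left, 1 = step right; reward 1 on reaching the target.
    # Tabular Q-learning applies the same update that DQN approximates with
    # a neural network over camera observations.
    N_STATES, N_ACTIONS, TARGET = 5, 2, 4
    ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

    def step(s, a):
        s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
        r = 1.0 if s2 == TARGET else 0.0
        return s2, r, s2 == TARGET

    rng = np.random.default_rng(0)
    Q = np.zeros((N_STATES, N_ACTIONS))
    for _ in range(200):                      # training episodes
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            a = int(rng.integers(N_ACTIONS)) if rng.random() < EPS else int(Q[s].argmax())
            s2, r, done = step(s, a)
            # Q-learning (Bellman) update -- the core of DQN's loss target
            Q[s, a] += ALPHA * (r + GAMMA * (0.0 if done else Q[s2].max()) - Q[s, a])
            s = s2

    policy = Q.argmax(axis=1)  # greedy policy: step right from every non-target state
    ```

    In the paper's setting, the table `Q` is replaced by a network mapping segmented camera images to action values, but the update target `r + γ·max_a' Q(s', a')` is the same.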
  • Article
    Citation - WoS: 14
    Citation - Scopus: 17
    New CNN and hybrid CNN-LSTM models for learning object manipulation of humanoid robots from demonstration
    (SPRINGER, 2022) Aslan, Simge Nur; Özalp, Recep; Uçar, Ayşegül; Güzeliş, Cüneyt
    As the environments that humans live in are complex and uncontrolled, object manipulation with humanoid robots is regarded as one of the most challenging tasks. Learning a manipulation skill from human demonstration (LfD) is one of the popular methods in the artificial intelligence and robotics community. This paper introduces a deep-learning-based teleoperation system for humanoid robots that imitates the human operator's object manipulation behavior. One of the fundamental problems in LfD is to approximate, with high accuracy, the robot trajectories obtained by means of human demonstrations. The work introduces novel models for object manipulation with humanoid robots using LfD: models based on Convolutional Neural Networks (CNNs), CNN-Long Short-Term Memory (CNN-LSTM) models combining the CNN and LSTM models, and their scaled variants. In the proposed LfD system, six models are employed to estimate the shoulder roll position of the humanoid robot. The data are first collected by teleoperation of a real Robotis-Op3 humanoid robot, and the models are trained. Trajectory estimation is then carried out by the trained CNN and CNN-LSTM models on the humanoid robot in an autonomous way. All trajectories relating to the joint positions are finally generated from the model outputs. The results of the six models are compared to each other and to the real trajectories in terms of training and validation loss, parameter number, and training and testing time. Extensive experimental results show that the proposed CNN models learn the joint positions well, and the hybrid CNN-LSTM models in the proposed teleoperation system, in particular, exhibit more accurate and stable results.
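    The recurrent half of the hybrid CNN-LSTM models described in this abstract can be sketched as a single LSTM time step applied over per-frame features. The dimensions, the random weight initialization, and the idea of a linear readout for the shoulder-roll position are illustrative assumptions, not the paper's architecture.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x, h, c, W, b):
        """One LSTM time step. x: current feature vector (e.g. CNN features
        of one camera frame); (h, c): recurrent state; W: stacked gate
        weights of shape (4*H, D+H); b: stacked gate biases of shape (4*H,).
        Returns the updated (h, c)."""
        H = h.shape[0]
        z = W @ np.concatenate([x, h]) + b
        i = sigmoid(z[:H])            # input gate
        f = sigmoid(z[H:2 * H])       # forget gate
        o = sigmoid(z[2 * H:3 * H])   # output gate
        g = np.tanh(z[3 * H:])        # candidate cell state
        c = f * c + i * g
        h = o * np.tanh(c)
        return h, c

    # Hypothetical shapes: 8-dim CNN features per frame, hidden size 4;
    # a linear readout of h would then regress the shoulder-roll position.
    rng = np.random.default_rng(0)
    D, H = 8, 4
    W = rng.standard_normal((4 * H, D + H)) * 0.1
    b = np.zeros(4 * H)
    h, c = np.zeros(H), np.zeros(H)
    for x in rng.standard_normal((5, D)):   # a 5-frame demonstration clip
        h, c = lstm_step(x, h, c, W, b)
    ```

    The recurrent state `(h, c)` carried across frames is what lets the hybrid models exploit the temporal structure of a demonstration, which a frame-by-frame CNN cannot.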
  • Article
    Citation - Scopus: 4
    New convolutional neural network models for efficient object recognition with humanoid robots
    (Taylor and Francis Ltd., 2022) Aslan, Simge Nur; Uçar, Ayşegül; Güzeliş, Cüneyt
    Humanoid robots are expected to manipulate objects they have not previously seen in real-life environments. Hence, it is important that robots have object recognition capability. However, object recognition in real time, at different locations and different object positions, is still a challenging problem. The current paper presents four novel small-structure models based on Convolutional Neural Networks (CNNs) for object recognition with humanoid robots. In the proposed models, a few combinations of convolutions are used to recognize the class labels. The MNIST and CIFAR-10 benchmark datasets are first tested on our models. The performance of the proposed models is shown by comparison with the best state-of-the-art models. The models are then applied on the Robotis-Op3 humanoid robot to recognize objects of different shapes. The results of the models are compared to those of models such as VGG-16 and Residual Network-20 (ResNet-20) in terms of training and validation accuracy and loss, parameter number, and training time. The experimental results show that the proposed models exhibit highly accurate recognition with a lower parameter number and shorter training time than the complex models. Consequently, the proposed models can be considered promising, powerful models for object recognition with humanoid robots.
  • Conference Object
    Citation - Scopus: 4
    Semantic Segmentation for Object Detection and Grasping with Humanoid Robots
    (Institute of Electrical and Electronics Engineers Inc., 2020) Guzelis, Cuneyt; Ucar, Aysegul; Aslan, Simge Nur