GCRIS

Browsing by Author "Uçar, Ayşegül"

Now showing 1 - 4 of 4
  • Article
    Citation - Scopus: 3
    Development of a deep wavelet pyramid scene parsing semantic segmentation network for scene perception in indoor environments
    (Springer Science and Business Media Deutschland GmbH, 2023) Aslan, Simge Nur; Uçar, Ayşegül; Güzeliş, Cüneyt
    In this paper, a new Deep Wavelet Pyramid Scene Parsing Network (DW-PSPNet) is proposed as an effective combination of a Discrete Wavelet Transform (DWT) inception module, channel and spatial attention modules, and PSPNet. To the best of our knowledge, improved semantic segmentation via this combination has not yet been reported in the literature. The paper has two main contributions: (1) a new backbone network for PSPNet, introduced as a combination of DWT inception modules and attention mechanisms, and (2) a new and improved version of the PSPNet base structure. Further, three new modifications are introduced. First, the drop activation function is used to increase the validation and test accuracy of the segmentation. Second, a skip connection from the backbone is applied to increase validation and test accuracies by restoring the resolution of the feature maps via full utilization of multilevel semantic features. Third, the Inverse Wavelet Transform (IWT) and a convolution layer are applied to obtain the segmented images without information loss. DW-PSPNet was evaluated on our own data, generated using a Robotis-OP3 humanoid robot to detect objects in indoor environments, and on a benchmark dataset. Simulation results show higher performance of the proposed network compared with that of previous successful networks in handling semantic segmentation tasks in indoor environments. Moreover, extensive experiments on the benchmark ADE20K dataset were also conducted. DW-PSPNet achieved an mIoU score of 45.97% on the ADE20K validation set, which is a new state-of-the-art result. © 2023 Elsevier B.V. All rights reserved.
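
As a reading aid only, the block below is a minimal PyTorch sketch of the kind of DWT-plus-attention module the abstract describes: the four Haar subbands replace strided downsampling, and channel/spatial attention reweight the fused features. The Haar transform, the squeeze-and-excitation-style channel attention, the CBAM-style spatial attention, and all layer sizes are assumptions for illustration, not the authors' DW-PSPNet implementation.

```python
# Hypothetical sketch of a DWT downsampling block with channel and spatial
# attention, in the spirit of the DW-PSPNet abstract (not the authors' code).
import torch
import torch.nn as nn

def haar_dwt(x):
    """Single-level 2-D Haar DWT: returns LL, LH, HL, HH subbands at (H/2, W/2)."""
    a = x[:, :, 0::2, 0::2]
    b = x[:, :, 0::2, 1::2]
    c = x[:, :, 1::2, 0::2]
    d = x[:, :, 1::2, 1::2]
    ll = (a + b + c + d) / 2
    lh = (a + b - c - d) / 2
    hl = (a - b + c - d) / 2
    hh = (a - b - c + d) / 2
    return ll, lh, hl, hh

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel gating (an assumed design)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)

class SpatialAttention(nn.Module):
    """CBAM-style spatial gating from channel-wise mean and max maps."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class DWTInceptionBlock(nn.Module):
    """Concatenate the four DWT subbands, fuse with a 1x1 conv, then apply
    channel and spatial attention. The exact ordering is an assumption."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.fuse = nn.Conv2d(4 * in_ch, out_ch, kernel_size=1)
        self.ca = ChannelAttention(out_ch)
        self.sa = SpatialAttention()

    def forward(self, x):
        ll, lh, hl, hh = haar_dwt(x)
        y = self.fuse(torch.cat([ll, lh, hl, hh], dim=1))
        return self.sa(self.ca(y))

# Example: a 64-channel feature map is halved in resolution without pooling.
block = DWTInceptionBlock(in_ch=64, out_ch=128)
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 128, 16, 16])
```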
  • Article
    Citation - WoS: 13
    Citation - Scopus: 18
    Development of a New Robust Stable Walking Algorithm for a Humanoid Robot Using Deep Reinforcement Learning with Multi-Sensor Data Fusion
    (MDPI, 2023) Kaymak, Çağrı; Uçar, Ayşegül; Güzeliş, Cüneyt
    The difficult task of creating reliable mobility for humanoid robots has been studied for decades. Even though several different walking strategies have been put forth and walking performance has substantially increased, stability still falls short of expectations. Applications of Reinforcement Learning (RL) techniques are constrained by slow convergence and ineffective training. This paper develops a new robust and efficient framework for the Robotis-OP2 humanoid robot that combines a typical trajectory-generating controller with Deep Reinforcement Learning (DRL) to overcome these limitations. The framework consists of optimizing the walking trajectory parameters and a posture balancing system. The robot's multiple sensors are used for parameter optimization. Walking parameters are optimized using the Dueling Double Deep Q Network (D3QN), one of the DRL algorithms, in the Webots simulator. The hip strategy is adopted for the posture balancing system. Experimental studies are carried out in both simulated and real environments with the proposed framework and with Robotis-OP2's walking algorithm. Experimental results show that the robot walks more stably with the proposed framework than with Robotis-OP2's walking algorithm. The proposed framework should be useful to researchers working on humanoid robot locomotion.
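
The D3QN the abstract names combines a dueling value/advantage head with the Double-DQN target (the online network selects the next action, the target network evaluates it). Below is a minimal PyTorch sketch of that update rule; the network sizes, hyperparameters, and batch layout are illustrative assumptions, not the paper's training setup for walking parameters.

```python
# Minimal sketch of the Dueling Double DQN (D3QN) update named in the
# abstract; sizes and hyperparameters are placeholders, not the paper's.
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)
        self.adv = nn.Linear(hidden, n_actions)

    def forward(self, s):
        h = self.body(s)
        a = self.adv(h)
        return self.value(h) + a - a.mean(dim=1, keepdim=True)

def d3qn_loss(online, target, batch, gamma=0.99):
    """Double-DQN target: online net picks the argmax action in s',
    target net scores that action; the online net regresses toward it."""
    s, a, r, s2, done = batch  # tensors: states, actions, rewards, next states, done flags
    q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        a2 = online(s2).argmax(dim=1, keepdim=True)   # action selection
        q2 = target(s2).gather(1, a2).squeeze(1)      # action evaluation
        y = r + gamma * (1.0 - done) * q2
    return nn.functional.mse_loss(q, y)
```

In practice the target network is a slowly updated copy of the online network, which is the standard stabilization trick this family of algorithms relies on.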
  • Article
    Citation - WoS: 14
    Citation - Scopus: 17
    New CNN and hybrid CNN-LSTM models for learning object manipulation of humanoid robots from demonstration
    (SPRINGER, 2022) Aslan, Simge Nur; Özalp, Recep; Uçar, Ayşegül; Güzeliş, Cüneyt
    As the environments humans live in are complex and uncontrolled, object manipulation with humanoid robots is regarded as one of the most challenging tasks. Learning a manipulation skill from human demonstration (Learning from Demonstration, LfD) is one of the popular methods in the artificial intelligence and robotics community. This paper introduces a deep-learning-based teleoperation system for humanoid robots that imitates the human operator's object manipulation behavior. One of the fundamental problems in LfD is approximating the robot trajectories obtained from human demonstrations with high accuracy. The work introduces novel models based on Convolutional Neural Networks (CNNs), hybrid CNN-Long Short-Term Memory (CNN-LSTM) models that combine CNNs with LSTMs, and their scaled variants for object manipulation with humanoid robots using LfD. In the proposed LfD system, six models are employed to estimate the shoulder roll position of the humanoid robot. The data are first collected through teleoperation of a real Robotis-OP3 humanoid robot, and the models are trained. Trajectory estimation is then carried out autonomously on the humanoid robot by the trained CNN and CNN-LSTM models. All trajectories relating to the joint positions are finally generated from the model outputs. The results of the six models are compared to each other and to the real trajectories in terms of training and validation loss, parameter count, and training and testing time. Extensive experimental results show that the proposed CNN models learn the joint positions well, and the hybrid CNN-LSTM models in the proposed teleoperation system in particular yield more accurate and stable results.
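
A hybrid CNN-LSTM trajectory estimator of the general kind the abstract describes might look like the following PyTorch sketch: a 1-D CNN extracts local features from a window of demonstrated joint positions, and an LSTM models the temporal dynamics before a linear head regresses the shoulder-roll position. The input layout, layer widths, and single-output head are assumptions; the paper's six models are not reproduced here.

```python
# Hypothetical CNN-LSTM regressor in the spirit of the abstract
# (not one of the paper's six models).
import torch
import torch.nn as nn

class CNNLSTMTrajectory(nn.Module):
    def __init__(self, n_joints, hidden=64):
        super().__init__()
        # 1-D convolutions over time extract local motion features.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_joints, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # LSTM captures the longer-range temporal dependencies.
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # predicted shoulder-roll position

    def forward(self, x):                 # x: (batch, time, n_joints)
        f = self.cnn(x.transpose(1, 2)).transpose(1, 2)  # (batch, time, 32)
        out, _ = self.lstm(f)
        return self.head(out[:, -1])      # regress from the last time step

# Example: a window of 20 time steps over 18 assumed joint channels.
model = CNNLSTMTrajectory(n_joints=18)
print(model(torch.randn(4, 20, 18)).shape)  # torch.Size([4, 1])
```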
  • Article
    Citation - Scopus: 4
    New convolutional neural network models for efficient object recognition with humanoid robots
    (Taylor and Francis Ltd., 2022) Aslan, Simge Nur; Uçar, Ayşegül; Güzeliş, Cüneyt
    Humanoid robots are expected to manipulate objects they have not previously seen in real-life environments. Hence, it is important that the robots have object recognition capability. However, object recognition in real time, across different locations and object positions, is still a challenging problem. The current paper presents four novel models with small structures, based on Convolutional Neural Networks (CNNs), for object recognition with humanoid robots. In the proposed models, a few combinations of convolutions are used to recognize the class labels. The MNIST and CIFAR-10 benchmark datasets are first used to test our models. The performance of the proposed models is shown by comparison with the best state-of-the-art models. The models are then applied on the Robotis-OP3 humanoid robot to recognize objects of different shapes. The results of the models are compared to those of models such as VGG-16 and Residual Network-20 (ResNet-20) in terms of training and validation accuracy and loss, parameter count, and training time. The experimental results show that the proposed models achieve highly accurate recognition with fewer parameters and shorter training time than the complex models. Consequently, the proposed models can be considered promising, powerful models for object recognition with humanoid robots. © 2022 Elsevier B.V. All rights reserved.
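
For orientation only, a compact CNN in the spirit of the abstract's small-structure models could be sketched as below; the layer counts and widths are assumptions chosen for CIFAR-10-sized input, not the paper's four architectures.

```python
# Illustrative compact CNN of the kind the abstract compares against
# VGG-16 and ResNet-20; layer sizes are assumptions, not the paper's.
import torch
import torch.nn as nn

def small_cnn(n_classes=10):
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
        nn.MaxPool2d(2),                                   # 32x32 -> 16x16
        nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
        nn.MaxPool2d(2),                                   # 16x16 -> 8x8
        nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),             # global pooling
        nn.Linear(64, n_classes),                          # class logits
    )

# Example: CIFAR-10-shaped batch in, 10 class logits out.
print(small_cnn()(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 10])
```

Global average pooling instead of large fully connected layers is one common way such models keep the parameter count low, which matches the abstract's emphasis on fewer parameters and shorter training time.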