Journal article details
Alexandria Engineering Journal
PackerRobo: Model-based robot vision self supervised learning in CART
Jian Ping Li¹, Zulkefli Mansor¹, Hesham Alhumyani¹, Naushad Varish², Shayla Islam³, Majid Alshammari⁴, Rashid A. Saeed⁵, Asif Khan⁶, Mohammad Kamrul Hasan⁶
[1] Corresponding authors.
Center for Cyber Security, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia (UKM), 43600 Bangi, Malaysia
Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Guntur, AP 522502, India
Department of Computer Science, Institute of Computer Science and Digital Innovation, UCSI University, Malaysia
Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, UKM Bangi 43600, Malaysia
School of Computer Science and Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu 611731, China
Keywords: Information Retrieval; Robotics Vision and Control; Machine Learning; Decision Process Model; CART
DOI:
Source: DOAJ
【Abstract】

Robots are widely used to replace human effort with machine-generated responses. When humans interact with robots, it is essential that both forecast actions based on current conditions, and considerable effort has been channeled toward attaining this coordination. In complex environments, inferring robot mobility and adapting to random, unstructured scenarios remains a difficult task in visual processing and imaging. To address this issue, a new Vision-Based Interaction Model based on deep neural networks is proposed. The model mitigates the error amplification issue by applying past inputs through features extracted by a Deep Belief Network (DBN). In addition, a novel Vision-Based Robotics Learning model is proposed for scene understanding and recognition using deep neural networks. Moreover, a vision theory-based smart learning algorithm is suggested for deciding on favorable outcomes, so the model can use object motions to extract the information needed for Turning, Gripping, and object mobility. To validate the proposed model, a number of experiments were performed on benchmark datasets, where it showed higher performance than several state-of-the-art methods.
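The abstract pairs learned visual features with a CART decision model that maps a scene to a robot action. The sketch below is not the paper's implementation; it only illustrates that pipeline stage with a standard scikit-learn decision tree. The 8-dimensional feature vectors, the synthetic labeling rule, and the action names (turn, grip, move object) are all hypothetical stand-ins for the DBN-extracted features and action set described in the paper.

```python
# Minimal sketch, assuming pre-extracted visual features: a CART classifier
# chooses a robot action from a feature vector. Synthetic data stands in
# for DBN features; labels follow an arbitrary rule for demonstration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical 8-dimensional feature vectors (stand-ins for DBN outputs)
# with action labels: 0 = turn, 1 = grip, 2 = move object.
X = rng.normal(size=(300, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int) + (X[:, 2] > 1).astype(int)

# CART: axis-aligned splits chosen greedily by impurity reduction.
cart = DecisionTreeClassifier(max_depth=4, random_state=0)
cart.fit(X, y)

actions = {0: "turn", 1: "grip", 2: "move object"}
pred = cart.predict(X[:5])
print([actions[p] for p in pred])
```

In a full system the feature vectors would come from the trained DBN rather than a random generator, and the tree depth would be tuned on held-out scenes.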

【License】

Unknown   
