Please use this identifier to cite or link to this item: http://dspace.uniten.edu.my/jspui/handle/123456789/9173
Title: Comparison study of computational parameter values between LRN and NARX in identifying nonlinear systems
Authors: Nordin, F.H. 
Nagi, F.H. 
Zainul Abidin, A.A. 
Issue Date: 2013
Abstract: Determining the parameter values of the nonlinear autoregressive model with exogenous inputs (NARX) is not an easy task, even though NARX is reported to identify nonlinear systems successfully. Apart from the activation functions, number of layers, layer size, learning rate, and number of epochs, the numbers of delays at the input and at the feedback loop must also be determined. The layer recurrent network (LRN) is seen to have the potential to outperform NARX; however, few papers have reported using the LRN to identify nonlinear systems. Therefore, the aim of this paper is to investigate and analyze the parametric evaluation of the LRN and NARX in identifying 3 different types of nonlinear systems. Of the 3 nonlinear systems, the satellite's attitude state space is more complex than the sigmoid and polynomial equations. To ensure an unbiased comparison, a general guideline is used to select the parameter values in an organized manner. The LRN and NARX performance is analyzed based on the training and architecture parameters, mean squared errors, and correlation coefficient values. The results show that the LRN outperformed NARX in training quality, required an equal or smaller number of parameters to be determined through heuristic processes, needed an equal or lower number of epochs, and produced a smaller training error than NARX, especially when identifying the satellite's attitude. This indicates that the LRN is capable of identifying a more complex nonlinear system than NARX. © TÜBİTAK.
URI: http://dspace.uniten.edu.my/jspui/handle/123456789/9173
Appears in Collections: COGS Scholarly Publication
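The parameter burden described in the abstract can be seen by contrasting how the two architectures form a prediction. The following is a minimal, illustrative Python/NumPy sketch (not the authors' code; the layer sizes, delay counts, and tanh activation are assumptions): the NARX regressor is built from tapped delay lines on the input and on the fed-back output, both of whose lengths must be chosen beforehand, whereas the LRN carries context in a recurrent hidden state and needs no such delay choices.

    # Illustrative sketch only: contrasts NARX and LRN one-step-ahead prediction.
    # All dimensions, delay counts, and the tanh activation are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def narx_step(u_hist, y_hist, W_in, W_out, b_h, b_o):
        """NARX: the hidden layer sees delayed inputs AND delayed outputs,
        so input delays and feedback delays are extra parameters to tune."""
        x = np.concatenate([u_hist, y_hist])   # tapped delay lines form the regressor
        h = np.tanh(W_in @ x + b_h)
        return W_out @ h + b_o

    def lrn_step(u, h_prev, W_in, W_rec, W_out, b_h, b_o):
        """LRN: the hidden layer sees the current input plus its own previous
        activation, so no explicit delay lengths have to be specified."""
        h = np.tanh(W_in @ u + W_rec @ h_prev + b_h)
        return W_out @ h + b_o, h

    # Assumed dimensions (not values from the paper).
    n_u, n_y, n_h = 1, 1, 5      # input, output, hidden sizes
    d_u, d_y = 2, 2              # NARX input and feedback delays

    # Random weights stand in for trained ones.
    W_in_narx = rng.standard_normal((n_h, n_u * d_u + n_y * d_y))
    W_in_lrn  = rng.standard_normal((n_h, n_u))
    W_rec_lrn = rng.standard_normal((n_h, n_h))
    W_out     = rng.standard_normal((n_y, n_h))
    b_h, b_o  = np.zeros(n_h), np.zeros(n_y)

    # Drive both untrained models with the same input sequence to show the
    # bookkeeping each architecture requires at every time step.
    u = rng.uniform(-1, 1, size=(20, n_u))
    y_narx_hist = np.zeros(n_y * d_y)
    h_lrn = np.zeros(n_h)
    for t in range(d_u, len(u)):
        u_hist = u[t - d_u:t].ravel()
        y_narx = narx_step(u_hist, y_narx_hist, W_in_narx, W_out, b_h, b_o)
        y_lrn, h_lrn = lrn_step(u[t], h_lrn, W_in_lrn, W_rec_lrn, W_out, b_h, b_o)
        y_narx_hist = np.roll(y_narx_hist, -n_y)   # shift the output delay line
        y_narx_hist[-n_y:] = y_narx

In this sketch the NARX side requires two additional heuristic choices (d_u and d_y) before any training can start, which is the kind of extra parameter determination the paper compares against the LRN.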
