Please use this identifier to cite or link to this item: http://dspace.uniten.edu.my/jspui/handle/123456789/9120
DC Field: Value
dc.contributor.author: Qayyum, A.
dc.contributor.author: Malik, A.S.
dc.contributor.author: Naufal, M.
dc.contributor.author: Saad, M.
dc.contributor.author: Mazher, M.
dc.contributor.author: Abdullah, F.
dc.contributor.author: Abdullah, T.A.R.B.T.
dc.date.accessioned: 2018-02-21T05:00:16Z
dc.date.available: 2018-02-21T05:00:16Z
dc.date.issued: 2016
dc.identifier.uri: http://dspace.uniten.edu.my/jspui/handle/123456789/9120
dc.description.abstract: Sparse representation is a very active area in computer vision and image analysis, with applications in de-noising, stereo vision, image inpainting, image restoration, image de-blurring, and many others. Sparse modeling requires the design of an appropriate dictionary, and many dictionaries have been reported in the literature. In this paper, we implemented fixed dictionaries and adaptive dictionaries, i.e., the Method of Optimal Directions (MOD) and KSVD. Both adaptive methods are used to train on noisy images, compute the error, and recover the atoms from small patches of the images. The results showed that our proposed dictionaries performed much better for atom recovery in noisy patches of the images. The dictionary based on the discrete wavelet transform (DWT) basis function with KSVD produced the most accurate results compared to all other dictionaries. Moreover, for fast convergence of the RMSE value to its minimum, the DWT with KSVD and MOD dictionaries showed a higher convergence rate than the discrete cosine transform (DCT) with KSVD and MOD. The computational complexity increased slightly with the DWT dictionary compared to the DCT dictionary. © 2015 IEEE.
dc.title: Designing of overcomplete dictionaries based on DCT and DWT
item.grantfulltext: none
item.fulltext: No Fulltext
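The abstract describes sparse coding of noisy image patches over an overcomplete DCT dictionary. The paper's own implementation is not available here, so the following is only a minimal sketch of the general idea, assuming a standard overcomplete DCT construction and a simple orthogonal matching pursuit (OMP) for atom recovery; the function names, dictionary size, and sparsity level are illustrative choices, not the authors' settings.

```python
import numpy as np

def overcomplete_dct(n, K):
    """Build an n x K overcomplete DCT dictionary (K > n) with unit-norm atoms.

    Each atom samples a DCT cosine; non-DC atoms are mean-subtracted, as is
    common in overcomplete DCT constructions.
    """
    D = np.zeros((n, K))
    t = np.arange(n)
    for k in range(K):
        atom = np.cos(np.pi * k * (2 * t + 1) / (2 * K))
        if k > 0:
            atom -= atom.mean()
        D[:, k] = atom / np.linalg.norm(atom)
    return D

def omp(D, y, sparsity):
    """Greedy OMP: select `sparsity` atoms of D that best approximate y."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    coeffs = np.zeros(0)
    for _ in range(sparsity):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit y on all selected atoms (the "orthogonal" step).
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x[support] = coeffs
    return x

if __name__ == "__main__":
    # Synthetic noisy "patch": 3 atoms of a 16 x 32 DCT dictionary plus noise.
    rng = np.random.default_rng(0)
    D = overcomplete_dct(16, 32)
    x_true = np.zeros(32)
    x_true[[3, 10, 25]] = [1.0, -0.8, 0.5]
    y = D @ x_true + 0.01 * rng.standard_normal(16)
    x_hat = omp(D, y, sparsity=3)
    print("reconstruction error:", np.linalg.norm(y - D @ x_hat))
```

In a full MOD or KSVD pipeline, this sparse-coding step alternates with a dictionary-update step (a least-squares update of all atoms in MOD, a per-atom SVD update in KSVD), which is the training loop the abstract refers to.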
Appears in Collections:COE Scholarly Publication

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.