Showing 9 results for Wavelet Transform
Sh. Mahmoudi-Barmas, Sh. Kasaei,
Volume 4, Issue 1 (1-2008)
Abstract
Image registration is a crucial step in most image processing tasks in which the final result is obtained by combining various data sources. In general, most registration methods consist of four steps: feature extraction, feature matching, transform modeling, and, finally, image resampling. As the accuracy of a registration process is highly dependent on the feature extraction and matching methods, in this paper we propose a new method for extracting salient edges from satellite images. Owing to the efficiency of multiresolution data representation, we considered four state-of-the-art multiresolution transforms (namely, the wavelet, curvelet, complex wavelet, and contourlet transforms) in the feature extraction step of the proposed image registration method. Experimental results and a performance comparison among these transforms showed that the contourlet transform performs best at extracting efficient edges from satellite images. Obtaining salient, stable, and distinguishable features increased the accuracy of the proposed registration process.
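As a rough illustration of wavelet-based edge extraction (a much simpler stand-in for the curvelet and contourlet machinery the paper evaluates), the sketch below thresholds one-level Haar detail coefficients along image rows; the function names and the toy patch are hypothetical.

```python
import math

def haar_detail_rows(img):
    # One-level Haar detail (high-pass) coefficients along each row;
    # large magnitudes flag vertical edges between paired columns.
    return [[(row[2*i] - row[2*i + 1]) / math.sqrt(2)
             for i in range(len(row) // 2)] for row in img]

def edge_map(img, thresh):
    # Binary edge map: 1 where the detail magnitude exceeds the threshold.
    return [[1 if abs(v) > thresh else 0 for v in row]
            for row in haar_detail_rows(img)]

# Toy 4x4 "satellite" patch with a vertical step edge at column 1.
patch = [[0, 9, 9, 9]] * 4
edges = edge_map(patch, 0.5)
```

In a real registration pipeline, the surviving edge coefficients would then feed the feature-matching stage.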
Sujan Rajbhandari, Zabih Ghassemlooy, Maia Angelova,
Volume 5, Issue 2 (6-2009)
Abstract
Artificial neural networks (ANNs) find application in diverse areas of communication engineering, such as channel equalization, channel modeling, and error control coding, owing to their capability for nonlinear processing, adaptability, and parallel processing. The wavelet transform (WT), on the other hand, with both time and frequency resolution, provides an exact representation of a signal in both domains. Applying these signal processing tools to channel compensation and noise reduction can provide enhanced performance compared with traditional tools. In this paper, the slot error rate (SER) performance of digital pulse interval modulation (DPIM) in diffuse indoor optical wireless (OW) links subject to artificial light interference (ALI) is reported for a new receiver structure based on the discrete WT (DWT) and an ANN. Simulation results show that the DWT-ANN-based receiver is very effective in reducing the effects of multipath-induced inter-symbol interference (ISI) and ALI.
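A minimal sketch of the wavelet half of such a receiver, assuming a one-level Haar DWT with soft thresholding of the detail coefficients (the ANN stage and the DPIM specifics are omitted):

```python
import math

SQRT2 = math.sqrt(2)

def haar_dwt(x):
    # One-level Haar DWT: approximation (low-pass) and detail (high-pass).
    a = [(x[2*i] + x[2*i + 1]) / SQRT2 for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i + 1]) / SQRT2 for i in range(len(x) // 2)]
    return a, d

def haar_idwt(a, d):
    # Inverse of haar_dwt (perfect reconstruction when nothing is altered).
    x = []
    for ai, di in zip(a, d):
        x.extend([(ai + di) / SQRT2, (ai - di) / SQRT2])
    return x

def soft(v, t):
    # Soft threshold: shrink magnitude by t, clipping small values to zero.
    return math.copysign(max(abs(v) - t, 0.0), v)

def denoise(x, t):
    a, d = haar_dwt(x)
    return haar_idwt(a, [soft(v, t) for v in d])

# Small sample-to-sample jitter is absorbed by thresholding the details.
clean = denoise([5.1, 4.9, 5.2, 4.8], 0.5)
```

The denoised samples would then be passed to the slot-decision stage (here, the ANN).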
M. M Daevaeiha, M. R Homaeinezhad, M. Akraminia, A. Ghaffari, M. Atarod,
Volume 6, Issue 3 (9-2010)
Abstract
The aim of this study is to introduce a new methodology for the isolation of ectopic rhythms in ambulatory electrocardiogram (ECG) Holter data via appropriate statistical analyses that impose a reasonable computational burden. First, the events of the ECG signal are detected and delineated using a robust wavelet-based algorithm. Then, using a binary Neyman-Pearson radius test, an appropriate classifier is designed to categorize ventricular complexes into "Normal + Premature Atrial Contraction (PAC)" and "Premature Ventricular Contraction (PVC)" beats. Afterwards, an innovative measure, termed the P-Wave Strength Factor (PSF), is defined based on the wavelet transform of the delineated P-wave and used to evaluate the P-wave power. Finally, ventricular contractions following weak P-waves are categorized as PAC complexes, whereas those following strong P-waves are classified as normal complexes. The discriminant quality of the PSF-based feature space was evaluated by a modified learning vector quantization (MLVQ) classifier trained with the original QRS complexes and the corresponding discrete wavelet transform (DWT) dyadic scale. The performance of the proposed Neyman-Pearson classifier (NPC) is also compared with MLVQ and support vector machine (SVM) classifiers using a common feature space. The processing speed of the proposed algorithm is more than 176,000 samples/sec, with desirable heart arrhythmia classification performance. The performance of the proposed two-lead NPC algorithm is compared with the MLVQ and SVM classifiers, and the obtained results indicate the validity of the proposed method. To justify the newly defined feature space (σi1, σi2, PSFi), an NPC with the proposed feature space and an MLVQ classification algorithm trained with the original complex, its corresponding DWT, and the RR interval were considered, and their performances were compared with each other. An accuracy difference of about 0.15% indicates the acceptable discriminant quality of the properly selected feature elements. The proposed algorithm was applied to Holter data from the DAY general hospital (more than 1,500,000 beats), and average values of Se = 99.73% and P+ = 99.58% were achieved for sensitivity and positive predictivity, respectively.
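The PSF-based decision can be caricatured as follows; the strength measure used here (energy of one-level Haar detail coefficients of the delineated P-wave) and the threshold are illustrative assumptions, not the paper's exact definitions.

```python
import math

def p_wave_strength(pwave):
    # Hypothetical PSF: energy of the one-level Haar detail coefficients
    # of the delineated P-wave segment.
    d = [(pwave[2*i] - pwave[2*i + 1]) / math.sqrt(2)
         for i in range(len(pwave) // 2)]
    return sum(v * v for v in d)

def label_beat(psf, thresh):
    # A weak P-wave preceding the ventricular complex suggests PAC;
    # a strong P-wave suggests a normal beat.
    return "PAC" if psf < thresh else "Normal"

strong_p = [0.0, 0.3, 0.8, 0.3]    # visible P-wave deflection
absent_p = [0.0, 0.0, 0.01, 0.0]   # nearly flat baseline
```

In the paper, this PSF decision runs only on beats the Neyman-Pearson stage has already separated from PVC complexes.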
M. R. Homaeinezhad, A. Ghaffari, H. Najjaran Toosi, M. Tahmasebi, M. M. Daevaeiha,
Volume 7, Issue 1 (3-2011)
Abstract
In this study, a new detection-delineation algorithm for the major events of long-duration Holter electrocardiograms (ECG) is described, which operates based on the false-alarm error-bounded segmentation of a decision statistic with a simple mathematical origin. To this end, three-lead Holter data is first pre-processed by applying an appropriate bandpass finite impulse response (FIR) filter and by calculating the Euclidean norm between corresponding samples of the three leads. Then, an à trous discrete wavelet transform (DWT) is applied to the resulting norm, and an unscented synthetic measure is calculated from several of the obtained dyadic scales to magnify the effects of low-power waves, such as P- or T-waves, during the occurrence of arrhythmia(s). Afterwards, a uniform-length window is slid sample by sample along the synthetic scale, and at each position six features are calculated from the excerpted segment: the summation of the nonlinearly amplified Hilbert transform, the summation of the absolute first-order differentiation, the summation of the absolute second-order differentiation, the curve length, the area, and the variance. All feature trends are then normalized and superimposed to yield the newly defined multiple-order derivative wavelet-based measure (MDWM) for the detection and delineation of ECG events. In the next step, an α-level Neyman-Pearson classifier (a false-alarm probability (FAP)-controlled tester) is implemented to detect and delineate QRS complexes. To show the advantages of the presented method, it is applied to the MIT-BIH Arrhythmia Database, the QT Database, and the T-Wave Alternans Database; as a result, average sensitivity and positive predictivity of Se = 99.96% and P+ = 99.96% are obtained for the detection of QRS complexes, with average maximum delineation errors of 5.7 msec, 3.8 msec, and 6.1 msec for the P-wave, QRS complex, and T-wave, respectively, showing a marginal improvement in detection-delineation performance.
In the next step, the proposed method is applied to DAY hospital high-resolution Holter data (more than 1,500,000 beats, including bundle branch blocks (BBB), premature ventricular complexes (PVC), and premature atrial complexes (PAC)), and average values of Se = 99.98% and P+ = 99.97% are obtained for QRS detection. In summary, the marginal performance improvement of the ECG event detection-delineation process over a wide range of signal-to-noise ratios (SNR), reliable robustness against strong noise, artifacts, and probable severe arrhythmia(s) in high-resolution Holter data, and a processing speed of 163,000 samples/sec can be mentioned as important merits and capabilities of the proposed algorithm.
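The sliding-window feature fusion can be sketched as below, using three of the six listed features (absolute first-order differentiation, area, and variance); the min-max normalization and the window length are illustrative choices.

```python
def window_features(seg):
    # Three of the listed features: absolute first-order differentiation,
    # area, and variance of the excerpted segment.
    diff1 = sum(abs(seg[i + 1] - seg[i]) for i in range(len(seg) - 1))
    area = sum(abs(v) for v in seg)
    mean = sum(seg) / len(seg)
    var = sum((v - mean) ** 2 for v in seg) / len(seg)
    return [diff1, area, var]

def sliding_measure(sig, w):
    # Slide a length-w window sample by sample, min-max normalize each
    # feature trend, and superimpose the trends into one decision measure.
    feats = [window_features(sig[i:i + w]) for i in range(len(sig) - w + 1)]
    measure = [0.0] * len(feats)
    for col in zip(*feats):
        lo, hi = min(col), max(col)
        span = (hi - lo) or 1.0
        for i, v in enumerate(col):
            measure[i] += (v - lo) / span
    return measure

# A flat trace with one spike: the measure peaks on the window covering it.
sig = [0.0] * 10 + [5.0, -4.0, 3.0] + [0.0] * 10
m = sliding_measure(sig, 3)
```

The α-level Neyman-Pearson stage would then threshold this measure so that the false-alarm probability stays bounded.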
S. Mohammadi, S. Talebi, A. Hakimi,
Volume 8, Issue 2 (6-2012)
Abstract
In this paper we introduce two innovative image and video watermarking algorithms. The paper's main emphasis is on the use of chaotic maps to boost the algorithms' security and resistance against attacks. By encrypting the watermark information with a one-dimensional chaotic map, we make the extraction of the watermark very hard for potential attackers. In another approach, we select the embedding positions with a two-dimensional chaotic map, which enables us to distribute the watermark information satisfactorily throughout the host signal. This prevents the concentration of watermark data in one corner of the host signal, which effectively protects it from attacks that involve cropping the signal. The simulation results demonstrate that the proposed schemes are quite resistant to many kinds of attacks that commonly threaten watermarking algorithms.
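A toy version of the one-dimensional chaotic encryption step might look like this: a logistic-map keystream is XORed with the watermark bits, so only a party knowing the seed (and map parameter) can undo it. The seed and parameter values are arbitrary examples, not the paper's.

```python
def logistic_stream(x0, n, r=3.99):
    # Logistic map x_{k+1} = r*x_k*(1 - x_k) iterated in its chaotic
    # regime; each state is quantized to one keystream bit.
    bits, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits

def xor_bits(bits, key):
    return [b ^ k for b, k in zip(bits, key)]

watermark = [1, 0, 1, 1, 0, 0, 1, 0]
key = logistic_stream(0.3141, len(watermark))   # seed acts as the secret key
encrypted = xor_bits(watermark, key)
recovered = xor_bits(encrypted, key)            # same seed undoes the XOR
```

Sensitivity to the initial condition is what makes brute-forcing the seed impractical for longer watermarks.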
E. Ehsaeyan,
Volume 12, Issue 1 (3-2016)
Abstract
The use of wavelets in denoising has the advantage of representing details well; the edges, however, are not as well preserved. The total variation technique has advantages over simple denoising techniques such as linear smoothing or median filtering, which reduce noise but at the same time smooth away edges to a greater or lesser degree. In this paper, an efficient denoising method based on the total variation (TV) model and the dual-tree complex wavelet transform (DTCWT) is proposed to combine both properties. In our method, TV is employed to refine the low-passed coefficients, and the DTCWT is used to shrink the high-passed noisy coefficients, to achieve more accurate image recovery. The efficiency of our approach is first analyzed by comparing the results with well-known methods such as ProbShrink, BLS-GSM, SURE bivariate shrinkage, NL-Means, and the TV model. Second, it is compared with some recently reported denoising methods. Experimental results show that the proposed method outperforms steerable pyramid denoising by 8.5% in terms of PSNR and 17.5% in terms of SSIM for standard images. The obtained results convince us that the proposed scheme provides better noise-blocking performance among reported state-of-the-art methods.
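For intuition, here is a minimal 1-D gradient-descent TV denoiser of the kind used to refine the low-pass band (the paper's 2-D TV/DTCWT combination is more involved); the smoothing constant `eps`, the weight `lam`, and the step size are illustrative.

```python
import math

def tv_denoise(y, lam=0.05, n_iter=300, step=0.1, eps=1e-3):
    # Gradient descent on 0.5*sum((x - y)^2) + lam*sum(|x[i+1] - x[i]|),
    # with the absolute value smoothed by eps to keep gradients bounded.
    x = list(y)
    for _ in range(n_iter):
        g = [xi - yi for xi, yi in zip(x, y)]   # data-fidelity gradient
        for i in range(len(x) - 1):
            d = x[i + 1] - x[i]
            s = d / math.sqrt(d * d + eps)      # smoothed sign of the jump
            g[i] -= lam * s
            g[i + 1] += lam * s
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Noisy two-level step: small jitter is flattened, the big edge survives.
y = [0.1, -0.1, 0.05, 1.1, 0.9, 1.05]
x = tv_denoise(y)
```

This edge-preserving behavior is exactly why TV is paired with wavelet shrinkage, which handles the fine-detail bands.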
E. Ehsaeyan,
Volume 13, Issue 3 (9-2017)
Abstract
Image denoising is a pre-processing stage used to preserve details, edges, and global contrast without blurring the corrupted image. Among state-of-the-art algorithms, block shrinkage denoising is an effective and compatible method for suppressing additive white Gaussian noise (AWGN). The traditional NeighShrink algorithm can remove Gaussian noise significantly but loses edge information in the process. To overcome this drawback, this paper develops an improved shrinkage algorithm in the wavelet domain based on NeighSURE Shrink. We establish a novel function to shrink neighboring coefficients and minimize Stein's Unbiased Risk Estimate (SURE). Several regularization parameters are employed to form a flexible threshold and can be adjusted via a genetic algorithm (GA), used as an optimization method with the SURE fitness function. The proposed function is verified to be competitive with, or better than, other shrinkage algorithms such as OracleShrink, BayesShrink, BiShrink, ProbShrink, and SURE bivariate shrink in visual quality measurements. Overall, the corrected NeighShrink algorithm improves the PSNR values of denoised images by about 2 dB.
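The core NeighShrink rule scales each wavelet coefficient by a factor that depends on the energy of its neighborhood; a 1-D sketch (the paper's GA-tuned flexible threshold is not reproduced):

```python
def neigh_shrink(coeffs, t, win=1):
    # Each coefficient is scaled by max(0, 1 - t^2 / S^2), where S^2 is
    # the energy of its neighborhood; isolated small (noisy) coefficients
    # are zeroed, while neighbors of strong features are kept.
    out = []
    for i, c in enumerate(coeffs):
        lo, hi = max(0, i - win), min(len(coeffs), i + win + 1)
        s2 = sum(v * v for v in coeffs[lo:hi])
        factor = max(0.0, 1.0 - t * t / s2) if s2 > 0 else 0.0
        out.append(c * factor)
    return out

# One strong (edge-like) coefficient among low-level noise.
shrunk = neigh_shrink([0.1, -0.1, 5.0, 0.1, -0.05], 0.5)
```

Because the strong coefficient props up its neighbors' energy, edge-adjacent detail survives, which is the property the traditional hard rule loses.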
M. Shams Esfand Abadi, H. Mesgarani, S. M. Khademiyan,
Volume 13, Issue 3 (9-2017)
Abstract
The wavelet transform-domain least-mean-square (WTDLMS) algorithm uses the self-orthogonalizing technique to improve the convergence performance of LMS. In the WTDLMS algorithm, the trade-off between the steady-state error and the convergence rate is set by a fixed step-size. In this paper, a WTDLMS adaptive algorithm with variable step-size (VSS) is established. The step-size in each subfilter changes according to the largest decrease in the mean square deviation. The simulation results show that the proposed VSS-WTDLMS has a faster convergence rate and lower misadjustment than the ordinary WTDLMS.
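As a time-domain caricature of a variable step-size adaptive filter (the paper works in the wavelet transform domain and derives the step-size from the mean square deviation, neither of which is reproduced here), the sketch below uses an error-power-driven step-size rule clipped to fixed bounds; all constants are illustrative.

```python
import random

def vss_lms(x, d, taps, mu0=0.05, alpha=0.97, gamma=0.01,
            mu_min=0.02, mu_max=0.5):
    # LMS with an error-power-driven variable step-size: mu grows while
    # the error is large (fast convergence) and shrinks as it decays
    # (low misadjustment), clipped to [mu_min, mu_max].
    w = [0.0] * taps
    mu = mu0
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]          # newest sample first
        e = d[n] - sum(wi * ui for wi, ui in zip(w, u))
        w = [wi + mu * e * ui for wi, ui in zip(w, u)]
        mu = min(mu_max, max(mu_min, alpha * mu + gamma * e * e))
    return w

# Identify an unknown 2-tap FIR system from its input/output pair.
random.seed(0)
x = [random.uniform(-1, 1) for _ in range(2000)]
d = [0.0] + [0.5 * x[n] - 0.3 * x[n - 1] for n in range(1, len(x))]
w = vss_lms(x, d, taps=2)
```

The VSS-WTDLMS idea applies the same trade-off per subband, with a step-size rule chosen to maximize the drop in mean square deviation at each update.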
M. K. Saini, R. K. Beniwal,
Volume 14, Issue 2 (6-2018)
Abstract
This paper presents a new framework based on a modified empirical mode decomposition (EMD) method for the detection of single and multiple power quality (PQ) disturbances. In the modified EMD, a DWT stage precedes the traditional EMD process; this scheme improves EMD by eliminating the mode-mixing problem. The algorithm has two steps: in the first step, the input PQ signal is decomposed into low- and high-frequency components using the DWT. In the second step, the low-frequency component is further processed with the EMD technique to obtain intrinsic mode functions (IMFs). Eight features are extracted from the IMFs of the low-frequency component; unlike the low-frequency component, features are extracted directly from the high-frequency component. All these features form a feature vector that is fed to a probabilistic neural network (PNN) classifier for the classification of PQ disturbances. For a comparative analysis of the PNN's performance, the results are compared with an SVM classifier. Moreover, the performance of the proposed methodology is also validated on noisy PQ signals. The PNN outperformed the SVM for both noiseless and noisy PQ signals.
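The first, DWT stage of this pipeline can be sketched with a one-level Haar split plus simple per-band features; the test signal (a 50 Hz tone carrying a weak high-frequency disturbance) and the feature choices are illustrative, not the paper's eight features.

```python
import math

def haar_split(sig):
    # One-level Haar DWT: low-frequency approximation band and
    # high-frequency detail band.
    a = [(sig[2*i] + sig[2*i + 1]) / math.sqrt(2)
         for i in range(len(sig) // 2)]
    d = [(sig[2*i] - sig[2*i + 1]) / math.sqrt(2)
         for i in range(len(sig) // 2)]
    return a, d

def band_features(band):
    # Two simple per-band features: energy and mean absolute value.
    energy = sum(v * v for v in band)
    mean_abs = sum(abs(v) for v in band) / len(band)
    return [energy, mean_abs]

# 50 Hz fundamental with a weak high-frequency disturbance, fs = 800 Hz.
fs = 800
sig = [math.sin(2 * math.pi * 50 * n / fs)
       + 0.2 * math.sin(2 * math.pi * 350 * n / fs) for n in range(64)]
low, high = haar_split(sig)
```

In the full framework, the low band would go on to EMD sifting while the high band's features feed the classifier directly.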