per
Research Center on Developing Advanced Technologies
Signal and Data Processing
2538-4201
2538-421X
2018-09
15
2
3
16
article
A Fuzzy Approach to Optimizing High-Order Time Series Prediction
Hesam Omranpour
h.omranpour@nit.ac.ir
1
Fahime Azadian
f.azadian196@yahoo.com
2
Babol Noshirvani University of Technology
Babol Noshirvani University of Technology
It is difficult to apply real-world concepts directly because of their inherent uncertainty. Time series are generally non-linear or non-stationary. Given these two properties, a prediction system must be sensitive enough to capture the structure of a time series and carry this sensitivity into the forecast: a good prediction system can closely scrutinize the hidden features of a time series and achieve high predictive accuracy. Traditional forecasting relies on statistical tools such as regression analysis, moving average, exponential moving average and autoregressive moving average. One of the biggest challenges of these approaches is that they require many observations and cannot handle linguistic variables or subjective expert opinions; they are also limited by linearity assumptions. To overcome the limitations of traditional methods, many researchers have turned to soft computing techniques such as fuzzy logic, fuzzy neural networks and evolutionary algorithms.
In this paper, we propose a novel fuzzy prediction model based on high-order fuzzy time series, following a computational approach to high-order fuzzy time series prediction. In this method, a group of features is evaluated by adding the value of the element preceding the element to be predicted to the differences of the series. Particle swarm optimization (PSO) is then used to optimize the parameters of the feature-calculation algorithm, which yields better performance on high-order fuzzy time series problems. Finally, by choosing the best features, the predicted value is inferred.
The approach works as follows: after fuzzifying the time series and building the fuzzy logical relations, specific computations on the lower bound of the range of the element to be predicted, its consecutive range, and the differences of sequential elements produce a set of candidate features. Particle swarm optimization then selects the best parameter. The fitness function of the proposed method has two parts: a general part (the average error over all orders) and a partial part (the error of each order's column considered individually). The general part reports the overall average error, while the partial part runs a separate PSO for each order from the second to the tenth (nine separate PSO runs). The feature-calculation algorithm uses two parameters, b and d: d is set manually at random in the range 3-1000, while PSO finds the value of b.
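The PSO-based parameter search described above can be sketched in a few lines of Python. The fitness function, bounds and swarm settings below are illustrative stand-ins, not the paper's actual error function or configuration:

```python
import random

def pso_minimize(fitness, bounds, n_particles=20, iters=60,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    # Minimal single-parameter PSO: minimize `fitness` over [lo, hi].
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                          # per-particle best positions
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g], pbest_f[g]   # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))  # clamp to bounds
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i], f
    return gbest, gbest_f
```

In the paper's setting, `fitness` would compute the prediction error of the high-order model for a candidate value of b, with d fixed at a randomly chosen value in 3-1000.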
The features obtained by this method contain fewer outliers, which brings the prediction closer to the true values with less error.
Finally, defuzzification is performed; the resulting score is the predicted integer value of the considered element.
To assess the precision of the prediction, we compare the proposed model to other methods using the mean squared error and the average error. To demonstrate its efficiency, we applied the method to the University of Alabama enrollment dataset. The proposed method provides better results and a lower error than the other methods.
http://jsdp.rcisp.ac.ir/article-1-603-en.pdf
Prediction
Time series
Optimization
Fuzzy logic
High-order fuzzy time series
Fuzzification
Defuzzification
per
Research Center on Developing Advanced Technologies
Signal and Data Processing
2538-4201
2538-421X
2018-09
15
2
17
30
article
Video Denoising Using Block Shearlet Transform
Hojjat Bagherzadeh
hojjat.bagherzadehhosseinabad@stu.um.ac.ir
1
Ahad Harati
a.harati@um.ac.ir
2
Zahra Amiri
za_am10@stu.um.ac.ir
3
RajabAli KamyabiGol
kamyabi@um.ac.ir
4
Ferdowsi University of Mashhad
Ferdowsi University of Mashhad
Ferdowsi University of Mashhad
Ferdowsi University of Mashhad
Parabolic scaling and anisotropic dilation form the core of famous multi-resolution transformations such as the curvelet and shearlet, which are widely used in signal processing applications like denoising. These non-adaptive geometrical wavelets are commonly used to extract structures and geometrical features of multi-dimensional signals and preserve them during noise removal. In discrete setups, it has been shown that shearlets can outperform their rivals since, in addition to scaling, they are formed by a shear operator that remains fully on the integer grid. However, the redundancy of the multidimensional shearlet transform grows exponentially with the number of dimensions, which in turn leads to exponential computational and space complexity. This seriously limits the applicability of the shearlet transform in higher dimensions. In contrast, separable transforms process each dimension of the data independently of the others, which results in missing the informative relations among different dimensions of the data.
Therefore, in this paper a modified discrete shearlet transform is proposed which overcomes the redundancy and complexity issues of the classical transform. It strikes a better tradeoff between the completeness of the analysis, achieved by processing the full relations among dimensions, on one hand, and the redundancy and computational complexity of the resulting transform on the other. In fact, the way the dilation matrix is decomposed and block-diagonalized provides a tuning parameter for the amount of inter-dimension analysis, which may be used to control both the computational complexity and the redundancy of the resulting transform.
In the context of video denoising, three different decompositions are proposed for the 3x3 dilation matrix. In each block diagonalization of this dilation matrix, one dimension is separated and the other two constitute a 2D shearlet transform. The three block shearlet transforms are computed for the input data up to three levels, and the resulting coefficients are treated with automatically adjusted thresholds. The output is obtained via an aggregation mechanism which combines the reconstructions of these three transforms. Experiments on a standard set of videos at different noise levels show that the proposed approach comes very close to the quality of a full 3D shearlet analysis while keeping the computational complexity (time and space) comparable to the 2D shearlet transform.
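The coefficient-thresholding step can be illustrated with a simple soft-thresholding rule. This is a generic sketch; the paper's actual shearlet coefficients and its automatic threshold adjustment are not reproduced here:

```python
def soft_threshold(coeffs, t):
    # Keep large (structure-bearing) coefficients, shrinking them by t,
    # and zero out small (noise-dominated) ones.
    return [(abs(c) - t) * (1.0 if c > 0 else -1.0) if abs(c) > t else 0.0
            for c in coeffs]
```

In a denoising pipeline, each subband of transform coefficients would pass through such a rule before reconstruction and aggregation.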
http://jsdp.rcisp.ac.ir/article-1-547-en.pdf
anisotropic dilation matrix
curvelet transform
multidimensional shearlet transform
block diagonal dilation matrix
video denoising
per
Research Center on Developing Advanced Technologies
Signal and Data Processing
2538-4201
2538-421X
2018-09
15
2
31
44
article
Improving Near Real Time Data Warehouse Refreshment
Isa Hazrati
i.hazrati@srttu.edu
1
Negin Daneshpour
ndaneshpour@srttu.edu
2
Shahid Rajaee Teacher Training University
Shahid Rajaee Teacher Training University
A near-real-time data warehouse gives end users the information essential for making sound decisions: the fresher the data, the better the resulting decisions. To keep the data fresh and up to date, changes on the source side must be propagated to the data warehouse with little delay, which requires transforming them into the data warehouse format. One well-known algorithm in this area is X-HYBRIDJOIN, which exploits the characteristics of real-world data to speed up the join operation by keeping the most frequently used partitions in main memory. The algorithm proposed in this paper joins a disk-based relation with an input data stream in order to enrich the stream. It uses a clustered index on the join attribute of the disk-based relation and assumes the join attribute is unique throughout the relation. The proposed algorithm improves the aforementioned algorithm in two stages. In the first stage, frequently accessed records of the source table are detected while the algorithm runs: each record access is counted, and once a counter exceeds a predefined threshold, the record is considered frequently used and placed in a hash table that keeps such records in main memory. When a stream tuple enters the join area, it is first looked up in this table. In the second stage, the method of choosing which partition to load into main memory is changed. A one-dimensional array is used to select, among all partitions of the source table, the partition with the highest number of matching records for the join. Using this array in each iteration always leads to loading the best partition into memory.
To evaluate the usefulness of the suggested algorithm, several experiments were conducted. Experimental results show that the service rate of the proposed algorithm exceeds that of existing algorithms. Service rate is the number of records joined per unit of time; a higher service rate indicates a more effective algorithm.
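The first-stage mechanism (counting record accesses and pinning hot records in a hash table) can be sketched as follows. The threshold value and data layout are illustrative assumptions, and the in-memory dict stands in for the disk-based relation and its clustered index:

```python
from collections import defaultdict

FREQ_THRESHOLD = 3  # illustrative; in practice tuned empirically

def stream_join(stream, relation):
    # relation: dict keyed on the (unique) join attribute, standing in for
    # the disk-based table accessed through its clustered index.
    counts = defaultdict(int)   # per-record access counter
    hot = {}                    # hash table of frequently used records
    out = []
    for tup in stream:
        k = tup["key"]
        rec = hot.get(k)            # 1) probe the in-memory hash table first
        if rec is None:
            rec = relation.get(k)   # 2) fall back to the disk-based relation
            if rec is not None:
                counts[k] += 1
                if counts[k] > FREQ_THRESHOLD:
                    hot[k] = rec    # pin the frequently accessed record
        if rec is not None:
            out.append({**tup, **rec})  # enriched stream tuple
    return out, hot
```

After enough accesses to the same key, subsequent lookups are served from `hot` without touching the disk-based relation, which is the source of the improved service rate.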
http://jsdp.rcisp.ac.ir/article-1-636-en.pdf
Near Real Time Data Warehouse
Join
Data Stream
Decision Making
per
Research Center on Developing Advanced Technologies
Signal and Data Processing
2538-4201
2538-421X
2018-09
15
2
45
54
article
An Optimal Algorithm for Segmenting Microscopic Blood Images for the Diagnosis of Acute Lymphoblastic Leukemia Cells Using the FCM Algorithm and Genetic Optimization
Abbas Karimi
akarimi@iau-arak.ac.ir
1
Leila Sadat Hoseini
rahenorayaneh@yahoo.com
2
Arak Branch, Islamic Azad University
Arak Branch, Islamic Azad University
Cancer is a disease caused by the irregular, uncontrollable growth of blood cells in the bone marrow. The generation of the three main blood cell types (platelets, red and white blood cells) starts from a progenitor cell called a blast. Blasts generate a considerable number of immature cells which mature under the influence of differentiation factors. If this process is interrupted, leukemia may be initiated.
Leukemia is diagnosed at hospitals or medical centers by a pathologist examining a blood smear on a slide under a microscope. Processing digital images of blood cells, in order to improve image quality or highlight malignant segments, is important in the early stages of the disease.
There are four types of leukemia: acute or chronic, and myeloid or lymphocytic. This study concentrates on acute lymphocytic (or lymphoblastic) leukemia (ALL). ALL is caused by the continuous generation of immature, malignant lymphocytes in the bone marrow, which are spread by the blood circulation to other organs.
In this research, the fuzzy C-means (FCM) algorithm is applied to digital blood images for clustering, neural networks are used for feature selection, and a genetic algorithm (GA) is used for optimization. The model diagnoses ALL at an early stage and categorizes it into three morphological subtypes (L1, L2 and L3).
To evaluate the performance of the proposed method, 38 samples from patients with ALL were collected. The method was applied to 68 microscopic images in terms of 15 features and yielded higher sensitivity, specificity and accuracy for 10 of the 15 features. Compared with three recent methods, the evaluations showed that sensitivity, specificity and accuracy reached 85.15%, 98.17% and 96.53%, respectively.
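The clustering step can be illustrated with a minimal one-dimensional fuzzy C-means on scalar pixel intensities. The paper clusters full blood-smear images using 15 features; the initialization, data and fuzzifier below are illustrative only:

```python
def fcm_1d(xs, c=2, m=2.0, iters=50):
    # Minimal 1-D fuzzy C-means: cluster scalar intensities into c fuzzy
    # groups (e.g. nucleus vs. background pixels in a blood-smear image).
    # Centers are initialized evenly over the data range.
    centers = [min(xs) + (max(xs) - min(xs)) * i / (c - 1) for i in range(c)]
    u = []
    for _ in range(iters):
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
        u = []
        for x in xs:
            d = [abs(x - ck) + 1e-12 for ck in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c))
                      for i in range(c)])
        # Center update: mean of the data weighted by u^m.
        centers = [sum((u[k][i] ** m) * xs[k] for k in range(len(xs))) /
                   sum(u[k][i] ** m for k in range(len(xs)))
                   for i in range(c)]
    return centers, u
```

The fuzzy memberships `u`, rather than hard labels, are what distinguish FCM from ordinary k-means; in the paper they feed the subsequent feature-selection and GA-optimization stages.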
http://jsdp.rcisp.ac.ir/article-1-567-en.pdf
leukemia
FCM algorithm
neural network
genetic algorithm
clustering
per
Research Center on Developing Advanced Technologies
Signal and Data Processing
2538-4201
2538-421X
2018-09
15
2
55
68
article
Intelligent Identification and Filtering of Unconventional Images Based on Deep Neural Networks
Ali Ghanbari Sorkhi
ali.ghanbari289@gmail.com
1
Mansour Fateh
mansoor_fateh@yahoo.com
2
Hamid Hassanpour
h.hassanpour@shahroodut.ac.ir
3
Shahrood University
Shahrood University
Shahrood University
Currently, the vast improvement in internet access and the significant growth of web-based broadcasters have resulted in the worldwide distribution and sharing of informative resources such as images. Although such sharing brings many advantages, certain risks, such as children's access to pornographic images, should not be neglected. Access to these images can threaten the culture of any society that includes children and adults. Many internet users are members of social websites such as Facebook or Instagram, and without an appropriate intelligent filtering system, the presence of a few unconventional images may result in the total filtering of these websites, to the displeasure of their members. In this paper, an approach for the classification and intelligent filtering of unconventional images is proposed. A major issue in this setting is analyzing the large volume of data available on such websites, which can be very time consuming. A deep neural network is a good option for resolving this issue while providing good accuracy on huge databases. In this research, a new architecture for identifying unconventional images is proposed, combining the AlexNet and LeNet architectures and using convolutional, pooling and fully-connected layers.
The activation function used in this architecture is the Rectified Linear Unit (ReLU), chosen for its fast convergence in deep convolutional networks and its simplicity of implementation. The proposed architecture consists of several parts. The first two parts consist of convolutional layers, ReLUs and pooling; here, convolutions with different dimensions and filters are applied to the input image. The next part uses a convolutional layer with ReLU but without pooling. The following part, like the first two, again includes convolutional layers, ReLU and pooling. Finally, the last three parts are fully-connected layers with ReLU. The output of the last layer consists of two classes, specifying the degree to which each input belongs to the unconventional or conventional image class. The method was tested on a large-scale dataset, and these tests show that the proposed method is more accurate than other recently developed methods for identifying unconventional images.
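The layer arithmetic of such a stack of convolution and pooling layers can be checked with two small helpers. The 227/11/4 figures in the usage below are the classic AlexNet first-layer values, used only as an example, not the paper's exact configuration:

```python
def relu(x):
    # Rectified Linear Unit: max(0, x), the activation used throughout
    # the architecture described above.
    return x if x > 0.0 else 0.0

def conv_out(size, kernel, stride=1, pad=0):
    # Spatial output size of a convolution or pooling layer:
    # floor((size + 2*pad - kernel) / stride) + 1.
    return (size + 2 * pad - kernel) // stride + 1
```

For example, `conv_out(227, 11, 4)` gives 55, the feature-map side length after AlexNet's first 11x11, stride-4 convolution; chaining `conv_out` calls lets one verify that the dimensions of stacked conv/pool layers line up before training.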
http://jsdp.rcisp.ac.ir/article-1-590-en.pdf
Intelligent filtering system
unconventional images
deep neural network
convolutional neural network
per
Research Center on Developing Advanced Technologies
Signal and Data Processing
2538-4201
2538-421X
2018-09
15
2
69
88
article
Smile and Laugh Expressions Detection Based on Local Minimum Key Points
Mina Mohammadi Dashti
m.mohammadi96@yahoo.com
1
Majid Harouni
m.harouni@iauda.ac.ir
2
Faculty of Computer Engineering, Najafabad Branch, Islamic Azad University
Faculty of Computer Engineering, Dolatabad Branch, Islamic Azad University
In this paper, smile and laugh facial expression detection is presented based on dimension reduction and a description process for key points. The paper has two main objectives: first, to extract local critical points in terms of their apparent features, and second, to reduce the system's dependence on training inputs. To achieve these objectives, three different scenarios for feature extraction are proposed. First, the distinct parts of a face are detected by the local binary pattern method, which extracts a set of global feature vectors for texture classification over various regions of the input face image. Then, in the first scenario, based on the correlation changes of adjacent pixels in the texture of the mouth area, a set of local key points is extracted using the Harris corner detector. In the second scenario, the dimensionality of the points extracted in the first scenario is reduced by principal component analysis, lowering computational cost and overall complexity without loss of performance or flexibility. In the final scenario, a set of critical points is extracted by comparing the point coordinates from the first scenario with the BRISK descriptor, which uses a neighborhood sampling strategy of directions around a key point. Then, without training the system, facial expressions are detected by comparing the shape and geometric distance of the extracted local points of the mouth area. The well-known Cohn-Kanade, CAFE, JAFFE and Yale benchmark datasets are used to evaluate the proposed approach. The results show an overall enhancement of 6.33% for the second scenario over the first and 16.46% for the third over the second. The experimental results indicate the efficiency of the proposed approach, recognizing images at rates above 90% across all the datasets.
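The training-free comparison of mouth-area key points can be illustrated by a mean displacement measure between two corresponding point sets. This is a simplified stand-in for the paper's shape and geometric-distance comparison:

```python
from math import hypot

def mean_pairwise_shift(pts_a, pts_b):
    # Mean Euclidean displacement between two equally sized, corresponding
    # key-point sets (e.g. mouth-area points in a neutral vs. smiling face).
    assert len(pts_a) == len(pts_b) and pts_a
    return sum(hypot(ax - bx, ay - by)
               for (ax, ay), (bx, by) in zip(pts_a, pts_b)) / len(pts_a)
```

A large shift of the mouth-corner points relative to the rest of the mouth contour would be read as a smile/laugh deformation, without any trained classifier.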
http://jsdp.rcisp.ac.ir/article-1-658-en.pdf
Local key points extraction
facial expression detection
corner detector
descriptor algorithm
dimension reduction
per
Research Center on Developing Advanced Technologies
Signal and Data Processing
2538-4201
2538-421X
2018-09
15
2
89
102
article
A Semi-supervised Framework Based on Self-constructed Adaptive Lexicon for Persian Sentiment Analysis
Mohsen Najafzadeh
mohsen.najafzadeh@mshdiau.ac.ir
1
Saeed Rahati Quchan
rahati@mshdiau.ac.ir
2
Reza Ghaemi
r.ghaemi@iauq.ac.ir
3
Mashhad Branch, Islamic Azad University
Mashhad Branch, Islamic Azad University
Quchan Branch, Islamic Azad University
With the advent of Web 2.0 and 3.0, users' contributions to the WWW have created a huge amount of valuable expressed opinions. Since manually analyzing such big data is difficult or impossible, sentiment analysis, as a branch of natural language processing, has received much attention. Unlike other (popular) languages, only a limited number of studies have addressed Persian sentiment analysis. In this study, for the first time, a semi-supervised framework for Persian sentiment analysis is proposed. One of the most recent studies in Persian is an algorithm based on extracting adaptive (dataset-sensitive) expert-defined emotional patterns; in this research, the extraction of the same state-of-the-art emotional patterns is performed automatically. Moreover, an HMM classifier that uses these patterns as its states is analyzed, and the HMM-based sentiment analysis is further combined with a rule-based classifier for the opinion-assignment process. In addition, toward intelligent self-training, a criterion for evaluating the reliability of the output is presented; when the criterion is satisfied, the self-training process updates both the lexicon-extraction component and the classifier as learning systems. Applied to the base dataset, the proposed method achieves 90% accuracy (despite its expert-independent lexicon generation), a considerable improvement over the supervised and semi-supervised methods in the state of the art. The semi-supervised method is also evaluated with a 10/90 train/test ratio, and its reliability is demonstrated by an accuracy of 80%.
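The HMM decoding step can be illustrated with a minimal Viterbi decoder. The two-state toy model in the usage below (state names, transition and emission probabilities) is entirely hypothetical and far smaller than the pattern-based state set used in the paper:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    # Most likely hidden-state sequence for an observation sequence,
    # computed by dynamic programming over the HMM trellis.
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # Best predecessor state for s at this step.
            best_prev = max(states, key=lambda p: V[-2][p] * trans_p[p][s])
            V[-1][s] = V[-2][best_prev] * trans_p[best_prev][s] * emit_p[s][o]
            new_path[s] = path[best_prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]
```

With emotional patterns as states and sentence tokens as observations, such a decoder yields the state sequence from which the opinion-assignment rules can then derive a polarity.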
http://jsdp.rcisp.ac.ir/article-1-644-en.pdf
Opinion Mining
Self-training
Self-constructed Lexicon
Hidden Markov Model
Adaptive Dictionary
per
Research Center on Developing Advanced Technologies
Signal and Data Processing
2538-4201
2538-421X
2018-09
15
2
103
118
article
A Dynamic Programming Algorithm for Tuning Concurrency of Business Processes
Mehdi Yaghoubi
mehdi.yaghoubi@gmail.com
1
Morteza Zahedi
zahedi@ganjineh.co.ir
2
Alireza Ahmadyfard
ahmadyfard@shahroodut.ac.ir
3
Shahrood University of Technology
Shahrood University of Technology
Shahrood University of Technology
Business process management systems (BPMS) are vital, complex information systems for competing in the global market and increasing economic productivity. Balancing the workload of resources in a BPMS is a challenge that researchers have long studied. Workload balancing increases system stability, improves resource efficiency and enhances product quality, and it is considered an important factor in the performance and stability of these systems. Setting the workload of each resource at a certain level increases its efficiency.
The main objectives of this research are balancing the workload across resources and keeping the workload of each resource uniform at a specified level. To optimize both, tuning the concurrency of multiple processes is proposed and studied, and this tuning is formulated as an optimization problem. In this paper, tuning the concurrency of business processes is introduced as a problem in BPMS, an applied issue whose solution improves the workload balance across resources and the workload uniformity of each resource.
To solve this problem, a delay vector is defined; each element of the delay vector inserts a synthetic delay at the start of one business process. A dynamic programming algorithm is then presented to compute the delay vector, and its speed is compared with a state-space search algorithm and the evolutionary PSO algorithm. The comparison shows that the proposed algorithm saves from 37 hours up to 5.8 years relative to the state-space search algorithm, while the PSO algorithm solves the same problem in just 3 minutes. The experimental results on a real dataset show a 21.64 percent improvement in the performance of the proposed algorithm.
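The effect of a delay vector on resource workload can be sketched as follows. The process tuples, delay values and uniformity measure are illustrative, not the paper's dynamic programming formulation:

```python
def workload_std(loads):
    # Uniformity measure: lower standard deviation = better balance.
    n = len(loads)
    mean = sum(loads) / n
    return (sum((x - mean) ** 2 for x in loads) / n) ** 0.5

def apply_delays(processes, delays, horizon):
    # processes: (start, duration, resource) triples. Delaying a process
    # shifts its busy interval, changing each resource's load-over-time
    # profile; an optimizer searches delays that flatten these profiles.
    load = {}
    for (start, dur, res), d in zip(processes, delays):
        profile = load.setdefault(res, [0] * horizon)
        for t in range(start + d, min(start + d + dur, horizon)):
            profile[t] += 1
    return load
```

An optimizer (dynamic programming in the paper; exhaustive search or PSO in the baselines) would score each candidate delay vector by measures like `workload_std` of the resulting profiles.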
http://jsdp.rcisp.ac.ir/article-1-623-en.pdf
Business process management systems
tuning concurrency of business processes
workload balancing
dynamic optimization
time complexity
per
Research Center on Developing Advanced Technologies
Signal and Data Processing
2538-4201
2538-421X
2018-09
15
2
119
132
article
A hybrid recommender system using trust and bi-clustering in order to increase the efficiency of collaborative filtering
Monireh Hosseini
hosseini@kntu.ac.ir
1
Maghsood Nasrollahi
Maghsod68@gmail.com
2
Ali Baghaei
a.baghaei@mail.kntu.ac.ir
3
K. N. Toosi University of Technology
K. N. Toosi University of Technology
K. N. Toosi University of Technology
In the present era, the amount of information grows exponentially, so finding the required information within this mass has become a major challenge. The success of e-commerce systems and online business transactions depends greatly on the effective design of product recommendation mechanisms. Providing high-quality recommendations is important for e-commerce systems so that users can make effective selections from a plethora of choices. Recommender systems have been developed to address this problem by customizing the information presented to each user.
Several types of recommender systems have been developed so far, including collaborative filtering, content-based and knowledge-based recommender systems, each with its own advantages and disadvantages. Most recommender systems are based on collaborative filtering, a widely accepted technique that generates recommendations based on the ratings of like-minded users. The main idea of this technique is to exploit the past behavior or existing opinions of the user community to predict products that the current user is likely to enjoy; the similarity between users or items is used to recommend products. However, the technique has several inherent problems, such as cold start, sparsity and scalability.
Since collaborative filtering is the most widely used recommendation technique, solving these problems and improving its effectiveness is one of the main challenges in this area, and no proposed hybrid system has yet resolved all of the collaborative filtering problems in a single, satisfactory manner. In this paper, we propose a new hybrid recommender system that applies a trust network together with bi-clustering to improve the effectiveness of collaborative filtering. The objectives of this research can thus be summarized as: reducing sparsity, increasing the speed of producing recommendations and increasing their accuracy.
In the proposed system, trust between users is used to fill in the sparse user-item matrix, mitigating the sparsity problem. Bi-clustering then subdivides the user-item matrix into sub-matrices, addressing the scalability problem of collaborative filtering; collaborative filtering is run on each sub-matrix, and the results are combined to produce recommendations for the users.
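The per-sub-matrix collaborative filtering step relies on a user-similarity measure such as Pearson correlation over co-rated items; a minimal version is sketched below (the rating dictionaries in the usage are illustrative):

```python
def pearson(u, v):
    # Similarity of two users' rating dicts {item_id: rating}, computed
    # over their co-rated items only, as in user-based CF.
    common = [i for i in u if i in v]
    if len(common) < 2:
        return 0.0  # not enough overlap to judge similarity
    mu = sum(u[i] for i in common) / len(common)
    mv = sum(v[i] for i in common) / len(common)
    num = sum((u[i] - mu) * (v[i] - mv) for i in common)
    den = (sum((u[i] - mu) ** 2 for i in common) *
           sum((v[i] - mv) ** 2 for i in common)) ** 0.5
    return num / den if den else 0.0
```

In the proposed system, the trust-filled ratings enlarge the co-rated overlap, so this similarity can be computed even for otherwise sparse user pairs, and it is evaluated only within each bi-cluster's sub-matrix.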
The experimental results on a subset of the extended Epinions dataset verify the effectiveness and efficiency of our proposed system over user-based collaborative filtering and hybrid collaborative filtering with trust techniques.
Improving the sparsity problem
Experimental results showed that our proposed system alleviates the sparsity problem, thanks to the use of trust in the hybrid recommender system. Using trust, many unknown ratings can be predicted, transforming the sparse user-item matrix into a half-full matrix.
Improving the scalability problem
The results show that the proposed system is faster than user-based collaborative filtering and hybrid collaborative filtering with trust, and that increasing the data volume has little effect on online computation time. The reason is the use of bi-clustering: the bi-clusters are built offline and break the rating matrix into smaller subsets, and running collaborative filtering on these smaller sets increases computation speed.
Improving the new-user problem
Thanks to the use of trust, this system can also provide accurate results for new users, because the set of products relevant to a new user can be enlarged through the trust relations between users. The system can predict the similarity between the new user and other users, so its results are more accurate than those of user-based collaborative filtering and hybrid collaborative filtering with trust.
http://jsdp.rcisp.ac.ir/article-1-613-en.pdf
Recommender systems
Collaborative filtering
Trust
Bi-clustering
Hybrid recommender systems
per
Research Center on Developing Advanced Technologies
Signal and Data Processing
2538-4201
2538-421X
2018-09
15
2
133
147
article
An Improved Rician Noise Correction Technique from the Magnitude of Diffusion MR Images
Marzieh Nezamzadeh
m.nezamzadeh@modares.ac.ir
1
Tarbiat Modares University
The true MR signal intensity extracted from noisy MR magnitude images is biased by Rician noise, caused by noise rectification in the magnitude calculation for low-intensity pixels. This noise is particularly problematic when a quantitative analysis is performed on magnitude images with low SNR (< 3.0). In such cases, the received signal for both the real and imaginary components fluctuates around a low level (e.g., zero), often producing negative values. The magnitude calculation rectifies all negative values to positive magnitudes, artificially raising the average level of these pixels; the signal is thus biased by the rectified noise. Diffusion MRI at high b-values (using strong magnetic gradients) is one of the most important cases of Rician noise bias. A technique for removing this bias from individual pixels of magnitude MR images is presented in this study. The method corrects the bias of individual pixels using a linear equation whose correction term is separated from the term to be corrected (the pixel intensity). The correction is exact when the mean and variance of the pixel-intensity probability density function are known. When accurate mean values are not available, a nearest-neighbor average approximates the mean in the linear correction term. With a nine-pixel nearest-neighbor average (one layer of nearest neighbors), the bias correction of individual pixel intensities is accurate to within 10% error at a signal-to-noise ratio of SNR = 1.0. Several noise-correction schemes from the literature are presented and compared; the new Rician bias correction presented in this work represents a significant improvement over previously published techniques. The proposed approach substantially removes the Rician noise bias from diffusion MR signal decay over an extended range of b-values, from zero to very high values.
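For reference, a widely used second-moment (power-image) correction for Rician-biased magnitudes can be written as below. Note that this is the classical scheme, not the paper's linear per-pixel correction, which additionally uses a nearest-neighbor estimate of the local mean:

```python
from math import sqrt

def rician_correct(m, sigma):
    # Second-moment correction: for a Rician magnitude M with true
    # amplitude A and Gaussian noise level sigma, E[M^2] = A^2 + 2*sigma^2,
    # so a bias-reduced estimate of A is sqrt(max(M^2 - 2*sigma^2, 0)).
    return sqrt(max(m * m - 2.0 * sigma * sigma, 0.0))
```

The `max(..., 0)` clamp handles low-SNR pixels whose squared magnitude falls below the noise floor, exactly the regime (SNR < 3) where the Rician bias is worst.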
http://jsdp.rcisp.ac.ir/article-1-643-en.pdf
magnitude signal
Diffusion MRI
probability distribution function
Rician bias