ISSN 2538-4201. Research Center on Developing Advanced Technologies. Volume 14, Number 1, June 2017.

Paper 375: Lip Reading: A New Authentication Method in Android Mobile Phone Applications
Fatemeh Sadat Lesani, Faranak Fotouhi Ghazvini, and Rouhollah Dianat (Qom University)
pp. 3-14. Received 30 May 2015; accepted 15 January 2016.

Today, mobile phones are among the first instruments every individual interacts with. There are many mobile applications people use to achieve their goals, and mobile banking applications are among the most heavily used. Security in m-bank applications is very important, so modern authentication methods are required. Most m-bank applications use text passwords, which can be stolen by key-loggers, hidden programs that record the keys struck by users. To overcome the key-logging issue, One-Time Passwords are used; they are secure but require additional tools, so they are not user-friendly. Moreover, voice-based passwords are not secure enough, since they can easily be overheard, and image-based passwords cannot satisfy users because of the screen limitations of mobile phones.

In this article, a new authentication method is introduced. The password is based on the motion of the user's lips, captured by the mobile phone camera. The visual information extracted from the movement of the user's lips forms the password, and the lip motion is then tracked to recognize the password using lip reading algorithms. The algorithm is based on the Viola-Jones method, combined with a pixel-based approach to segment the lips and extract features. After segmenting the lips, some special points of the region of interest are selected, and the information extracted from the lips is saved to serve as the algorithm's features. Normalization methods are then applied to prepare the features for the classification phase, where well-known algorithms such as the Support Vector Machine and K-Nearest Neighbor are applied to recognize the password and authenticate the user. Visual passwords prevent key-loggers from stealing passwords; however, the mobility of a mobile user causes the ambient light to vary across environments, and this research designs a solution to tackle that challenge. Finally, a mobile banking application is designed and developed to run on the Android platform. It incorporates a lip reader that recognizes passwords in offline mode, independent of an internet connection or a dedicated server. The implemented recognition method achieved a 70% success rate; a video capture of a letter with 10 frames could be processed in 3.8 seconds using 628 kilobytes of memory, resources easily available in today's mobile phones. Some mobile bank users tested the application and gave feedback on the lip reading password. Most of them were satisfied with it: they considered the lip reader more trustworthy than text and voice-based passwords, and rated its user-friendliness slightly above that of text passwords, which means the method can satisfy a mobile banking user.
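As a rough illustration of the kind of pipeline this abstract outlines, the sketch below pairs OpenCV's stock frontal-face Haar cascade (a Viola-Jones detector) with a scikit-learn SVM; the lower-face crop, the fixed 10-frame feature layout, and every name here are our own assumptions rather than details from the paper.

    # Hypothetical lip-password pipeline: Viola-Jones face detection, a crude
    # lip ROI, one feature vector per capture, then an SVM over letter classes.
    import cv2
    import numpy as np
    from sklearn.svm import SVC

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def mouth_roi(frame):
        """Detect the face and crop its lower third as the lip region."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            return None
        x, y, w, h = faces[0]
        return gray[y + 2 * h // 3 : y + h, x : x + w]

    def lip_features(frames, n=10, size=(32, 16)):
        """Stack n resized lip crops into one normalized feature vector."""
        rois = []
        for f in frames:
            r = mouth_roi(f)
            if r is not None:
                rois.append(cv2.resize(r, size))
            if len(rois) == n:
                break
        feat = np.concatenate([r.ravel() for r in rois]).astype(np.float32)
        return (feat - feat.mean()) / (feat.std() + 1e-8)

    # Training: X = one feature vector per captured letter, y = letter labels.
    # clf = SVC(kernel="rbf").fit(X, y); clf.predict([lip_features(frames)])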
Paper 314: A Novel Approach for Exceptional Phenomena Knowledge Detection and Analysis by Data Mining
Elahe Hajigol Yazdi, Masood Abessi, Mohammad Bagher Fakhrzad, and Hasan Hoseini Nasab (Yazd University)
pp. 15-28. Received 11 January 2015; accepted 5 October 2016.

Learning the logic of exceptions is a substantial challenge in data mining and knowledge discovery. Exceptional phenomena detection takes place among huge numbers of records in a database that contains a large number of normal records and only a few exceptional ones, so it is important to establish confidence in this limited number of exceptional records for effective learning. In this study, a new approach based on abnormality theory together with information and information granulation theories is presented to detect exceptions and recognize their behavioral patterns. The efficiency of the proposed method was evaluated by using it to detect exceptional stocks in the Iran stock market over a 30-month period and to learn their exceptional behavior. The proposed Enhanced-RISE (E-RISE) algorithm, a bottom-up learning approach, was implemented to extract the knowledge of normal and exceptional behavior. The extracted knowledge was used to design an expert system, based on the proposed abnormality theory, to predict new exceptions among 6022 stocks. The findings show that the results of the proposed approach to exceptional phenomena detection are in accordance with experts' opinions.

Paper 402: An Improved View Selection Algorithm in Data Warehouses by Finding Frequent Queries
Negin Daneshpour (Shahid Rajaee Teacher Training University)
pp. 29-40. Received 13 August 2015; accepted 29 October 2016.

A data warehouse is a source for storing historical data to support decision making. Analytic queries usually take a long time, so to reduce response time some views should be materialized to answer all queries with minimum delay. There are many solutions to the view selection problem; the most appropriate is to materialize frequent queries. Queries previously posed on the data warehouse carry profitable information, since they are likely to be used again in the future. Therefore, previous queries are first grouped using clustering algorithms, frequent queries are then found using data mining algorithms, and optimal queries are identified in each cluster. In the last stage, the optimal queries of each cluster are merged to produce one view per cluster, which is materialized. This paper proposes an algorithm for materializing frequent queries. The algorithm finds profitable views using the queries previously posed on the data warehouse; these views can answer most of the queries posed in the future. The paper uses the Index-BittableFI algorithm for finding frequent views. Using this algorithm improves on previous view selection algorithms and reduces the response time. The experiments show that the proposed algorithm yields a 23% improvement in response time and a 50% improvement in storage space.
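The frequent-query step can be pictured with a plain frequency count over the column sets that past queries touched; this is an Apriori-style stand-in for illustration only, not the Index-BittableFI algorithm the paper actually uses.

    # Count frequent column sets among past queries; a view materialized over
    # a frequent set can answer future queries that need only those columns.
    from itertools import combinations
    from collections import Counter

    def frequent_column_sets(queries, min_support, max_size=3):
        """queries: iterable of frozensets of column names."""
        counts = Counter()
        for q in queries:
            for k in range(1, min(len(q), max_size) + 1):
                counts.update(map(frozenset, combinations(sorted(q), k)))
        return {s: c for s, c in counts.items() if c >= min_support}

    history = [frozenset({"date", "store", "sales"}),
               frozenset({"date", "sales"}),
               frozenset({"date", "store", "sales"})]
    print(frequent_column_sets(history, min_support=2))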
Paper 361: Application of an ANN-GA Method for Predicting the Biting Force Using Electromyogram Signals
Nazanin Goharian, Sahar Moghimi, and Hadi Kalani (Ferdowsi University of Mashhad)
pp. 41-52. Received 18 April 2015; accepted 29 October 2016.

Human mastication is a common rhythmic behavior and a complex biomechanical process that is hard to reproduce. Today, investigating the relation between the electrical activity of muscles and force signals is of high importance in many applications, including gait analysis, orthopedics, rehabilitation, ergonomic design, haptic technology, tele-presence surgery, and human-machine interaction. Surface electrodes have many advantages over force sensors, which are often expensive and bulky; in particular, electrodes are cheaper and portable. Since the biting force is difficult to measure directly, in this paper we investigate the ability of a Multi-Layer Perceptron artificial neural network (MLPANN) and a Radial Basis Function artificial neural network (RBFANN) to predict the biting force of the incisor teeth from surface electromyography (EMG) signals. MLPANN and RBFANN are two of the most widely used neural network architectures, and both are known as universal approximators for nonlinear input-output mapping. To this end, the biting force and the EMG signals of the masticatory muscles were recorded and used as the output and input of the neural networks, respectively. A genetic algorithm was applied to find the best structure for the ANNs and the appropriate total time delay of the EMG inputs. The results show that the EMG signals recorded from the aforementioned muscles contain useful information about the biting force, and that MLPANN and RBFANN can capture the dynamics of the system with good precision. The mean percentage errors in the training and validation phases are 2.3% and 19.4% for MLPANN and 8.3% and 22.7% for RBFANN, respectively. An analysis of variance also shows no significant difference between the results achieved by MLPANN and RBFANN. The provided analysis will aid researchers in characterizing and investigating the mastication process through the specification of surface EMG signal patterns and the observation of the resulting biting force. Such models can provide clinical insight into the development of more effective rehabilitation therapies and can aid in assessing the effects of an intervention. This methodology can be applied to any tele-operated robot or orthotic device (exoskeleton), whether for rehabilitation or for the extension of human ability.
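A minimal stand-in for the EMG-to-force mapping is sketched below with scikit-learn's MLP regressor; the channel count, window length, and layer size are placeholders for the values the paper tunes with a genetic algorithm, and the arrays are dummies.

    # Map a window of delayed EMG samples to the current biting force.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def delay_embed(emg, force, delay):
        """Build (delayed EMG window) -> (current force) training pairs."""
        X = np.array([emg[t - delay:t].ravel() for t in range(delay, len(emg))])
        return X, force[delay:]

    emg = np.random.rand(1000, 4)    # 4 masticatory-muscle channels (dummy)
    force = np.random.rand(1000)     # recorded incisor force (dummy)
    X, y = delay_embed(emg, force, delay=20)
    # A GA would search `delay` and `hidden_layer_sizes`; fixed here.
    model = MLPRegressor(hidden_layer_sizes=(30,), max_iter=2000).fit(X, y)
    print("mean abs error: %.3f" % np.mean(np.abs(model.predict(X) - y)))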
Paper 362: Semi-Supervised Multiple Kernel Learning Using Distance Metric Learning Techniques
Tahereh Zare Bidoki, Mohammad Taghi Sadeghi, and Hamid Reza Abutalebi (Yazd University)
pp. 53-70. Received 23 April 2015; accepted 18 December 2016.

The distance metric plays a key role in many machine learning and computer vision algorithms, so choosing an appropriate metric has a direct effect on their performance. Recently, distance metric learning from labeled data or other available supervisory information has become a very active research area in machine learning. Studies in this area have shown that metric-learning-based algorithms considerably outperform commonly used metrics such as the Euclidean distance. In the kernelized version of metric learning algorithms, the data points are implicitly mapped into a new feature space using a non-linear kernel function, and the distance metric is then learned in this new space. Using a kernel function improves the performance of pattern recognition algorithms; however, choosing a proper kernel and tuning its parameter(s) are the main issues in such methods. Using an appropriate composite kernel instead of a single kernel is one of the best solutions to this problem. In this research study, a multiple kernel is constructed as the weighted sum of a set of basis kernels, and we propose different learning approaches to determine the kernel weights. The proposed learning techniques arise from distance metric learning concepts and are performed within a semi-supervised framework, where different cost functions are considered and the learning process uses a limited amount of supervisory information in the form of a small set of similarity and/or dissimilarity pairs. We define four distance-metric-based cost functions to optimize the multiple kernel weights. In the first structure, the average distance between the similarity pairs is taken as the cost function and is minimized subject to maximizing the average distance between the dissimilarity pairs; this is a commonly used objective in distance metric learning. In the second structure, we try to preserve the topological structure of the data using the idea of the graph Laplacian, by adding a penalty term to the cost function; this penalty term is also used in the remaining two structures. In the third arrangement, the effect of each dissimilarity pair is treated as an independent constraint. Finally, in the last structure, maximizing the distance between the dissimilarity pairs is included in the cost function itself rather than as a constraint. The proposed methods are examined in a clustering application using the kernel k-means algorithm. Both synthetic (an XOR data set) and real data sets (from the UCI repository) are used in the experiments, and the performance of the clustering algorithm with single kernels is taken as the baseline. Our experimental results confirm that using the multiple kernel not only improves the clustering results but also makes the algorithm independent of choosing the best kernel. The results also show that increasing the number of constraints, as in the third structure, leads to instability of the algorithm, which is expected.
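The composite kernel itself is easy to write down; the sketch below fixes the weights by hand, whereas the paper's contribution is learning them from similarity/dissimilarity pairs. The distance function shows the quantity the cost functions operate on.

    # Weighted sum of RBF basis kernels and the feature-space distances it
    # induces: d2_ij = K_ii + K_jj - 2 K_ij.
    import numpy as np

    def rbf_kernel(X, gamma):
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)

    def multiple_kernel(X, gammas, weights):
        """K = sum_m w_m K_m with w_m >= 0, normalized by the weight sum."""
        return sum(w * rbf_kernel(X, g)
                   for w, g in zip(weights, gammas)) / np.sum(weights)

    def kernel_distances(K):
        d = np.diag(K)
        return d[:, None] + d[None, :] - 2 * K   # squared distances

    X = np.random.rand(50, 2)
    K = multiple_kernel(X, gammas=[0.1, 1.0, 10.0], weights=[0.2, 0.5, 0.3])
    # Kernel k-means then clusters on K; learning `weights` would, e.g.,
    # shrink kernel_distances(K) over similarity pairs and stretch it over
    # dissimilarity pairs.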
Paper 342: Feature Reduction of Hyperspectral Data for Increasing Class Separability and Preserving Data Structure
Maryam Imani and Hassan Ghassemian (Tarbiat Modares University)
pp. 71-82. Received 7 March 2015; accepted 6 December 2016.

Hyperspectral imaging, by gathering hundreds of spectral bands from the surface of the Earth, allows us to separate materials with similar spectra. Hyperspectral images can be used in many applications such as estimating chemical and physical land parameters, classification, target detection, and unmixing; among these, classification is of particular interest. A hyperspectral image is a data cube containing two spatial dimensions and one spectral dimension. The Hughes phenomenon generally occurs in the supervised classification of hyperspectral images due to the limited number of available labeled samples and the curse of dimensionality, so feature reduction is an important preprocessing step for the analysis and classification of hyperspectral data. Feature reduction methods are categorized into feature selection and feature extraction approaches; our main focus in this paper is on feature extraction. Feature extraction methods are further divided into three main groups: supervised (with labeled samples), unsupervised (without labeled samples), and semi-supervised (with both labeled and unlabeled samples). The first group usually suffers from the limited number of available training samples; these methods consider the separability between classes and so are efficient for classification applications. The second group has no need for training samples, but such methods often do not consider the separability between classes and so are not appropriate for classification; they are usually used for signal representation or for preserving the local structure of the data. Using both labeled and unlabeled samples, as in the third group, can increase the abilities of the feature extractor. A feature extraction method belonging to this third group is proposed in this paper. The proposed method increases class separability while trying to preserve the structure of the data, using unlabeled samples in addition to the limited available training samples to improve classification performance. Experimental results on three real hyperspectral images show the better performance of the proposed method compared to some popular feature extraction methods in terms of classification accuracy.
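To make the third-group idea concrete, here is a generic semi-supervised projection in the same spirit, assuming a discriminant-style recipe of our own devising (not the authors' method): class scatter from the few labeled samples is traded off against a covariance term from the unlabeled ones.

    # Generic semi-supervised feature extraction sketch (NOT the paper's
    # algorithm): maximize labeled between-class scatter Sb against labeled
    # within-class scatter Sw plus an unlabeled structure term St.
    import numpy as np
    from scipy.linalg import eigh

    def semi_supervised_projection(Xl, yl, Xu, alpha=0.5, dim=5):
        """Return a d x dim projection; each class needs >= 2 labeled samples."""
        mu = Xl.mean(0)
        Sb = sum(len(Xl[yl == c]) * np.outer(m - mu, m - mu)
                 for c in np.unique(yl) for m in [Xl[yl == c].mean(0)])
        Sw = sum(np.cov(Xl[yl == c].T) * (len(Xl[yl == c]) - 1)
                 for c in np.unique(yl))
        St = np.cov(Xu.T) * (len(Xu) - 1)            # unlabeled data structure
        vals, vecs = eigh(Sb, Sw + alpha * St + 1e-6 * np.eye(Xl.shape[1]))
        return vecs[:, np.argsort(vals)[::-1][:dim]]  # top generalized eigvecs

    Xl = np.random.randn(20, 10); yl = np.repeat([0, 1], 10)
    Xu = np.random.randn(200, 10)                    # plentiful unlabeled pixels
    Z = Xu @ semi_supervised_projection(Xl, yl, Xu, dim=3)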
Paper 345: Design and Hardware Implementation of a Driver Drowsiness Detection System Based on the TMS320C5505A DSP Processor
Ali Rajaeyan and Hadi Grailu (University of Shahrood)
pp. 83-98. Received 17 March 2015; accepted 24 October 2016.

Every year, many people throughout the world lose their lives in road traffic accidents while driving. Providing secure driving conditions greatly reduces road traffic accidents and their associated death rates. Fatigue and drowsiness are two major causes of death in these accidents; therefore, early detection of driver drowsiness can greatly reduce such accidents. Results of NTSB investigations into serious and dangerous accidents where drivers had survived the crash pinpointed intense driver fatigue and drowsiness as their two major causes [1]. This research study first developed a database of brain signals from ten male volunteers under controlled conditions. A combination of the Wavelet Transform (WT) and a Support Vector Machine (SVM) classifier was then used to propose a drowsiness level detection method that uses only two EEG signal channels, and a hardware system was adopted for the practical implementation of the proposed method. The building blocks of this hardware system include a two-channel module for receiving and pre-processing EEG signals based on a TMS320C5509A digital signal processor. This processor was adopted here for the first time for detecting drowsiness level, and a real-time implementation of the SVM classifier demonstrated its functionality. The system is portable and battery-backed for 10 hours of operation. Results from simulation and from the hardware implementation of the proposed method on the ten volunteers indicated an accuracy of up to 100 percent.

Work on determining the drowsiness level of drivers is two-fold. The first group of methods uses the shape and general condition of the body, focusing on head movements, eye tracking, and eye-blink percentage; few hardware systems have been developed for this group. The second group uses biometric signals (e.g. ECG and EEG) to detect the drowsiness level of drivers [2-4]. EEG signals are the most widely applied biometric signals for drowsiness level determination due to their low risk and high reliability [21, 28]; accordingly, EEG signals were used in this work for the same purpose. A proper, valid, and accessible database with sufficient data entries plays an important role in the success of any proposed approach. Since the available databases were either inaccessible, in poor condition, or insufficient, a new database was first developed for this study, containing the EEG signals of ten male volunteers with a mean age of 24 and at least two years of road driving experience. The EEG signals were recorded in both alert and drowsy modes during driving simulation, using a driving simulator and a driving computer game. Most drowsiness detection methods use more than two brain channels [20]; in this work, however, only two channels were used while maintaining the efficiency of drowsiness level determination. This made the system less cluttered for the driver, scaled down the processing workload for detecting and displaying the drowsiness level, reduced power consumption, and maximized the hardware system's operating time. The recorded signals were pre-processed to prepare them for the subsequent feature extraction and classification stages. Spectral features related to a number of bands (especially alpha and theta) are the main features used for this purpose, and the wavelet transform has been an important method for extracting these bands and computing their related features [7-9]; SVMs and neural networks have been widely used as classifiers [15, 16, 18]. In this study, the WT and the energy of some frequency bands were adopted for feature extraction, and an SVM was used for classification.

Hardware-wise, very few studies have implemented their proposed approaches, while developments in the applications of signal processors have raised both their significance and the hope of using them in large-scale processing algorithms. Manufactured by Texas Instruments, the TMS320C55xx family of signal processors is an important and widely used type [23]. Thanks to its low-consumption members, this family is specialized for processing one-dimensional signals in portable applications. Its main characteristics include low power consumption, fair prices, diverse functional peripherals (e.g. USB and McBSP), direct memory access (DMA), a timer, an LCD controller, support for a number of widely used communication protocols, an A/D converter, fast internal dual-access memories, high operating frequency (typically 200 to 300 MHz), dedicated signal processing instructions (such as for the LMS and Viterbi algorithms), and parallel execution of two instructions. To the best of our knowledge, this signal processor had not previously been used for any drowsiness level detection application; a major contribution of this paper is using a TMS320C5505A digital signal processor in a portable hardware system for detecting the drowsiness level of drivers. The frequency band of EEG signals usually ranges from 0.5 to 30 Hz and is partitioned into the delta (0.5 to 4 Hz), theta (4 to 8 Hz), alpha (8 to 13 Hz), and beta (13 to 30 Hz) sub-bands. The energy of EEG signals rises in the low frequency bands (delta and theta) during meditation, deep relaxation, and the alertness-to-fatigue transition. With regard to these major sub-bands, an FIR band-pass filter with low and high cut-off frequencies of 0.3 and 30 Hz was designed using the windowing method. The developed hardware board had four inputs: two EEG signal channels (O1 and O2), a CZ reference channel, and a ground signal. It had low power consumption (less than 25 mW) and could operate for 10 hours on only two 3V CR2032 batteries; batteries with higher ampere-hour ratings would give longer operating times. Signals from the electrodes were pre-amplified and filtered on this board to remove noise outside the 0.5 to 30 Hz range. The electronic board designed for EEG signal processing and alertness/drowsiness detection incorporated a TMS320C5509A digital signal processor made by Texas Instruments; a TLV320AIC23B codec converted the analog signals to digital, and a TPS767D301 IC supplied power to the processor, both also made by Texas Instruments. In the power supply section, a fuse and a Zener diode were placed in series in the path supplying 5V to the power IC; together, these two components formed a protection circuit. The fuse automatically cut off the power once the current exceeded a 500 mA threshold, protecting the circuit against damage, while the 6.5V Zener diode prevented excessive input voltage from reaching the power IC. The power IC had two inputs and provided two output voltages (1.6V and 3.3V), which were distributed throughout the circuit. The codec IC had one microphone input and one stereo input; the two received EEG channels entered the stereo input and left the converter in serial form. This IC contained configuration registers that had to be programmed before conversion, via the I2C protocol using the SDA and SCL pins connected to the processor.
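The pre-processing chain described above maps naturally onto a few SciPy calls; the sketch assumes a 256 Hz sampling rate and uses Welch spectra in place of the paper's wavelet-based band energies.

    # Windowed-design FIR band-pass (0.3-30 Hz) plus relative energies of the
    # classic EEG sub-bands, as inputs for an alert/drowsy SVM.
    import numpy as np
    from scipy.signal import firwin, filtfilt, welch

    fs = 256                                         # assumed sampling rate
    bp = firwin(numtaps=101, cutoff=[0.3, 30.0], pass_zero=False, fs=fs)

    def band_energies(eeg):
        x = filtfilt(bp, [1.0], eeg)
        f, p = welch(x, fs=fs, nperseg=2 * fs)
        bands = {"delta": (0.5, 4), "theta": (4, 8),
                 "alpha": (8, 13), "beta": (13, 30)}
        total = np.trapz(p[(f >= 0.5) & (f <= 30)], f[(f >= 0.5) & (f <= 30)])
        return {name: np.trapz(p[(f >= lo) & (f < hi)],
                               f[(f >= lo) & (f < hi)]) / total
                for name, (lo, hi) in bands.items()}

    print(band_energies(np.random.randn(10 * fs)))   # 10 s dummy channel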
Paper 368: A GOP-Level Variable Bit Rate Control Algorithm for the H.265 Video Coding Standard
Davoud Fani, Mehdi Rezaei, and Maryam Sarhaddi Avval (University of Sistan and Baluchestan, Faculty of Electrical and Computer Engineering)
pp. 99-110. Received 14 May 2015; accepted 15 June 2016.

A rate control algorithm at the group of pictures (GOP) level is proposed in this paper for variable bit rate applications of the H.265/HEVC video coding standard under a buffer constraint. Due to structural changes in HEVC compared to previous standards, new rate control algorithms need to be designed. In the proposed algorithm, the quantization parameter (QP) of each GOP is obtained by modifying the QP of the previous GOP according to the target bit rate and the buffer status. The buffer status and target bit rate are the input variables of a two-dimensional lookup table whose output is designed to allow short-term variations in bit rate, in order to reach a better and more uniform visual quality in the reconstructed video. In addition, a QP cascading technique is used to calculate the QP of the frames within each GOP; it operates like a bit allocation scheme and achieves a suitable trade-off between quality and compression rate. Unlike conventional methods, the proposed scheme uses a lookup table instead of a rate-distortion model, which significantly reduces the computational complexity. Several video sequences with completely different contents were used in the experiments, and some short sequences were concatenated into long sequences that are closer to variable bit rate applications. The proposed lookup-table-based (LUT) algorithm was implemented in the HM reference software and compared with the λ-domain rate control algorithm (λ-RC) and with the constant-QP (CQP) case defined as the anchor. At almost the same average bit rate (CQP: 1527.97, LUT: 1520.92, λ-RC: 1529.41), the average QP (28.09, 28.18, 29.91) and average peak signal-to-noise ratio (PSNR) (37.88, 37.87, 37.76) of LUT are closer to CQP than those of λ-RC. The average QP standard deviation (1.13, 2.28, 4.27) and PSNR standard deviation (1.37, 2.11, 2.15) of LUT are smaller than those of λ-RC and closer to CQP. From the rate control point of view, the minimum buffering delay averaged over all video sequences obtained by LUT matches that of λ-RC, which is one of the best rate controllers proposed for HEVC (0.94, 0.36, 0.35). Consequently, the experimental results show that the bit rate is not only controlled according to the buffer constraints but the quality of the reconstructed video is also well maintained.
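A toy version of the control loop makes the lookup-table idea concrete; the table entries, thresholds, and bucket boundaries below are invented for illustration, not taken from the paper.

    # GOP-level QP update from buffer fullness and bit-rate error via a 2D
    # lookup table (rows: buffer low/ok/high; cols: under/on/over target).
    import numpy as np

    QP_DELTA = np.array([[-2, -1,  0],
                         [-1,  0,  1],
                         [ 0,  1,  2]])

    def next_gop_qp(qp, buffer_fullness, rate_ratio):
        row = np.digitize(buffer_fullness, [0.4, 0.8])   # assumed thresholds
        col = np.digitize(rate_ratio, [0.95, 1.05])      # actual/target bits
        return int(np.clip(qp + QP_DELTA[row, col], 0, 51))  # HEVC QP range

    qp = 28
    for fullness, ratio in [(0.3, 0.9), (0.85, 1.2), (0.6, 1.0)]:
        qp = next_gop_qp(qp, fullness, ratio)
        print(qp)   # 26, 28, 28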
Paper 390: A Survey on Spectral Methods in Spoken Language Identification
Shaghayegh Reza (Amirkabir University) and Jahanshah Kabudian (Razi University, Kermanshah)
pp. 111-134. Received 5 July 2015; accepted 29 October 2016.

Automatic spoken language identification is the task of identifying a language from the speech signal. Language identification systems can be divided into two categories: spectral-based and phonetic-based methods. In the former, short-time characteristics of the speech spectrum are extracted as a multi-dimensional vector, and a statistical model of these features is obtained for each language; the Gaussian mixture model (GMM) is the most common statistical model in spectral-based systems. In phonetic-based methods, on the other hand, speech signals are divided into a sequence of tokens using the hidden Markov model (HMM), and a language model is trained on the resulting sequence; PRLM, PPRLM, and PR-SVM are examples of phonetic-based approaches. In the research literature, a combination of phonetic-based and spectral-based systems is usually used to achieve a high-quality language identification system. Spectral-based methods have been the focus of researchers, since they need no labeled data and usually achieve better results than phonetic approaches. In this paper, therefore, different spectral methods for spoken language identification are introduced, implemented, and compared.

The basic spectral method is the Gaussian Mixture Model-Universal Background Model (GMM-UBM). In this paper, the MMI discriminative training method is used to improve the Gaussian model of each language, and, to model each language dynamically, the GMM is also replaced with an ergodic hidden Markov model (EHMM). The GSV-SVM and GMM tokenizer methods are implemented as two popular spectral approaches. Novel speaker and channel variation modeling methods are also used for language identification, including joint factor analysis (JFA), the identity vector (i-Vector), and several variation compensation methods exploited to improve the i-Vector results. Furthermore, to boost the performance of language recognition systems, different post-processing methods are applied. In post-processing, each element of the raw score vector indicates the degree to which the spoken signal belongs to a language; post-processing methods act on this vector as a classifier and allow better language detection decisions by mapping the raw score vector to the space of desired languages. Different studies have employed different post-processing methods, including GMM, NN, SVM, and LLR, and this study exploits several score post-processing methods to improve the quality of language recognition.

The goal of the experiments in this article is to detect and distinguish Farsi, English, and Arabic, individually and simultaneously, from other languages; the latter task is also called open-set language identification. The signals considered in this paper are two-sided conversations whose quality is usually poor due to strong noise, background speech or music, accents, and so on. GMM-UBM was implemented as the baseline method; with this approach, the mean EER over the three target languages (Farsi, English, and Arabic) was 13.58. The experimental results indicate that training the GMM language identification system with the MMI discriminative training algorithm is more efficient than training with the ML algorithm alone; specifically, the mean EER of the three target languages was reduced by about 8 percent compared to GMM-UBM. The GMM tokenizer method was also tested as a novel spectral approach, improving the mean EER of the three target languages by about 5 percent over GMM-UBM. The GSV-SVM discriminative method gave considerably better results than the common spectral approaches, reducing the mean EER of the three target languages by 11 percent compared to GMM-UBM; this study also improves the low speed of this method using a model pushing technique. Two novel methods, JFA and i-Vector, were implemented as well; both provide better results than GMM-UBM, with the mean EER of the three target languages reduced by 1% and 12%, respectively. Overall, the experimental results show that the i-Vector approach outperforms the other spectral language identification systems. This study is the result of seven years of research on spoken language identification at the advanced technology development center of Khajeh Nasiredin Tousi. The ongoing research includes studying and implementing novel spectral language identification algorithms such as PLDA, together with state-of-the-art phonetic language identification methods, in order to combine the spectral and phonetic systems and eventually achieve a high-quality language identification system.
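The GMM-UBM baseline can be sketched with scikit-learn mixtures standing in for the paper's models; the component count and dummy features are illustrative, and a proper system would adapt each language model from the UBM rather than training it independently.

    # Universal background model plus per-language GMMs; score an utterance
    # by its average log-likelihood ratio against the UBM.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def train_models(feats_by_lang, n_comp=4):
        ubm = GaussianMixture(n_components=n_comp, covariance_type="diag")
        ubm.fit(np.vstack(list(feats_by_lang.values())))
        langs = {lang: GaussianMixture(n_components=n_comp,
                                       covariance_type="diag").fit(f)
                 for lang, f in feats_by_lang.items()}
        return ubm, langs

    def identify(utt, ubm, langs):
        bg = ubm.score(utt)                # mean frame log-likelihood
        return max(langs, key=lambda L: langs[L].score(utt) - bg)

    feats = {"fa": np.random.randn(500, 12), "en": np.random.randn(500, 12) + 1}
    ubm, langs = train_models(feats)
    print(identify(np.random.randn(100, 12) + 1, ubm, langs))   # likely "en"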
Paper 399: Binaural Microscopic Model Based on Modulation Filterbank for the Prediction of Speech Intelligibility in Normal-Hearing Listeners
Ali Fallah and Masoud Geravanchizadeh (University of Tabriz)
pp. 135-151. Received 1 August 2015; accepted 29 October 2016.

In this study, a binaural microscopic model for the prediction of speech intelligibility based on the modulation filterbank is introduced. So far, spectral criteria such as the STI and SII, or other analytical methods, have been used in binaural models to determine binaural intelligibility. In the proposed model, unlike all previous binaural intelligibility prediction models, an automatic speech recognizer (ASR) is used in the back-end as the decision unit. One advantage of this approach is the possibility of analyzing the recognition rate of small parts of speech such as phonemes and syllables. Another advantage of the model lies in its use of pre-processing stages whose existence in the human auditory system has been verified. Using the proposed feature matrix in the speech recognizer, the model predicts well in the presence of a single source of stationary speech-shaped noise. Comparing the results of the proposed model with those of listening tests shows high correlation and low mean absolute error, and the consonant confusion matrices likewise show high correlation between predictions and measurements. The speech reception threshold predicted by the proposed model has a smaller mean absolute error (0.6 dB) than that of the baseline BSIM model.
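A toy modulation-filterbank front end in the spirit of the abstract can be written in a few lines; the band edges and envelope extraction below are generic textbook choices, not the model's actual parameters.

    # Temporal envelope via the Hilbert transform, band-passed at a few
    # modulation rates; the band RMS values form a small feature vector.
    import numpy as np
    from scipy.signal import hilbert, butter, sosfiltfilt

    def modulation_features(x, fs, mod_bands=((1, 4), (4, 8), (8, 16))):
        env = np.abs(hilbert(x))                     # temporal envelope
        feats = []
        for lo, hi in mod_bands:
            sos = butter(2, [lo, hi], btype="band", fs=fs, output="sos")
            feats.append(np.sqrt(np.mean(sosfiltfilt(sos, env) ** 2)))
        return np.array(feats)

    fs = 16000
    x = np.random.randn(fs)              # 1 s of noise as stand-in "speech"
    print(modulation_features(x, fs))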