


Mr Mohammad Sadegh Nemati Nia,
Volume 12, Issue 4 (3-2016)
Abstract

Guess-and-determine attacks are generic attacks on stream ciphers. These attacks are classified into ad-hoc and Heuristic Guess-and-Determine (HGD) attacks. One advantage of the HGD attack over ad-hoc attacks is that it is designed algorithmically for a large class of stream ciphers while remaining powerful. In this paper, we use auxiliary polynomials, in addition to the original equations, as inputs to the HGD attack on the TIPSY and SNOW 1.0 stream ciphers. Based on the concept of a guessed basis, the number of guesses in both the original and the improved HGD attack on TIPSY is six; however, the attack complexity is reduced from O(2^102) to O(2^96). This equals the complexity of the ad-hoc attack, but the size of the guessed basis is improved from seven to six. Also, the complexities of the heuristic GD attack on SNOW 1.0 with a guessed basis of size 6 and of the ad-hoc attack with a guessed basis of size 7 are O(2^202) and O(2^224), respectively, whereas the complexity and guessed-basis size of the improved HGD attack are reduced to O(2^160) and 5, respectively.
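As a toy illustration of the guess-and-determine idea (not TIPSY or SNOW 1.0 — the word size, relations, and values below are invented for the example), guessing a small basis of unknowns lets every remaining unknown be determined from the cipher's equations, so the search costs 2^(w·|basis|) instead of 2^(w·n):

```python
import itertools

W = 8  # toy word size in bits (hypothetical)

def determine(x0, x1):
    # the "determine" phase: remaining unknowns follow from the relations
    x2 = x0 ^ x1               # relation 1: x2 = x0 XOR x1
    x3 = (x0 + x2) % 2**W      # relation 2: x3 = x0 + x2 (mod 2^W)
    return x2, x3

# an unknown internal state consistent with the two relations
secret = (37, 202, 37 ^ 202, (37 + (37 ^ 202)) % 2**W)

# the "guess" phase: exhaust only the guessed basis {x0, x1}
for g0, g1 in itertools.product(range(2**W), repeat=2):
    if (g0, g1, *determine(g0, g1)) == secret:
        found = (g0, g1)
        break

assert found == (37, 202)  # a basis of size 2 recovers the full 4-word state
```

Here a basis of two words costs 2^16 guesses instead of the 2^32 needed for the full state; shrinking the guessed basis is exactly what reduces the attack complexity in the abstract.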


Dr. Fardin Ahmadizar, Khabat Soltanian, Dr. Fardin Akhlaghiantab,
Volume 13, Issue 1 (6-2016)
Abstract

Applications of artificial neural networks (ANNs) in areas such as the classification of images and audio signals show the ability of this artificial-intelligence technique to solve practical problems. Constructing and training ANNs is usually a time-consuming and difficult process. A suitable neural model must be able to learn the training data while also generalizing well. In this paper, multiple parallel populations are used to construct the ANN, and an evolution strategy is used for its training, so that in each population a particular ANN architecture is evolved. By using a bi-criteria selection method based on the error and complexity of the ANNs, the proposed algorithm can produce simple ANNs that have high generalization ability. To assess the performance of the algorithm, seven benchmark classification problems were used, and the algorithm was compared against existing evolutionary algorithms that train and/or construct ANNs. Experimental results show the efficiency and robustness of the proposed algorithm compared to the other methods. The paper also analyzes the impact of the parallel populations, the bi-criteria selection method, and the crossover operator on the algorithm's performance. A key advantage of the proposed algorithm is its use of parallel computing by means of multiple populations.
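A minimal sketch of the bi-criteria idea (the function name and numbers are illustrative, not the paper's code): a network survives selection only if no other network beats it on both error and complexity at once.

```python
def non_dominated(pop):
    """Keep (error, complexity) pairs not dominated in both criteria."""
    keep = []
    for a in pop:
        dominated = any(b[0] <= a[0] and b[1] <= a[1] and b != a for b in pop)
        if not dominated:
            keep.append(a)
    return keep

# (validation error, number of hidden nodes) for five candidate ANNs
pop = [(0.10, 50), (0.12, 20), (0.10, 40), (0.30, 10), (0.25, 60)]
survivors = non_dominated(pop)
assert set(survivors) == {(0.12, 20), (0.10, 40), (0.30, 10)}
```

Selecting on this trade-off front, rather than on error alone, is what biases the search toward simple networks with good generalization.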


, ,
Volume 13, Issue 1 (6-2016)
Abstract

In this paper, a new structure for image encryption using recursive cellular automata is presented. The scheme applies three recursive cellular automata in three separate steps. In the first step, the image is divided into blocks and the pixels are substituted. In the next step, the pixels are scrambled by the second cellular automaton, and in the last step, the blocks are joined together and the pixels are substituted by the third cellular automaton. Because the cellular automata are reversible, the image can be decrypted by performing the steps in reverse. The experimental results show that the encrypted image is not visually comprehensible; the algorithm also performs satisfactorily in quantitative assessments compared with some other schemes.
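The reversibility argument can be sketched with a simple XOR-chaining automaton over one pixel row (illustrative only, not the paper's actual automata): because each cell mixes in only its already-processed left neighbour and a key cell, the pass is undone exactly by running it in reverse order.

```python
def ca_encrypt(pixels, key_row):
    # each cell absorbs its (already updated) left neighbour and a key cell;
    # processing left-to-right makes the pass invertible
    out = list(pixels)
    for i in range(1, len(out)):
        out[i] ^= out[i - 1] ^ key_row[i]
    return out

def ca_decrypt(cipher, key_row):
    # undo right-to-left, so out[i-1] still holds the ciphertext value
    out = list(cipher)
    for i in range(len(out) - 1, 0, -1):
        out[i] ^= out[i - 1] ^ key_row[i]
    return out

row = [12, 200, 7, 99, 150]
key = [0, 31, 77, 5, 250]
assert ca_decrypt(ca_encrypt(row, key), key) == row  # round-trip is exact
```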


Mohsen Farhang, Hosein Bahramgiri, Hamid Dehghani,
Volume 13, Issue 2 (9-2016)
Abstract

In this paper, a feature-based modulation classification algorithm is developed for discriminating PSK signals. The candidate modulation types are assumed to be QPSK, OQPSK, π/4-DQPSK, and 8PSK. The proposed method applies an 8PSK baseband demodulator to extract the required features from observed symbols. The received signal with unknown modulation type is demodulated by an 8PSK demodulator whose output is modeled as a finite state machine with different states and transitions for each candidate modulation. The estimated probabilities of particular transitions constitute the discriminating features. The obtained features are given to a Bayesian classifier, which decides on the modulation type of the received signal. The probability of correct classification is computed for different numbers of observed symbols and SNR conditions through several simulations. The results show that the proposed method offers more accurate classification than previous methods for classifying variants of QPSK.
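The transition-probability features can be sketched as follows (illustrative code, not the paper's implementation): the demodulated 8PSK symbol indices are reduced to phase differences, and the relative frequency of each difference forms the feature vector. A QPSK-like source seen through an 8PSK demodulator, for instance, only produces even differences.

```python
from collections import Counter

def transition_features(symbols, m=8):
    # phase transitions between consecutive 8PSK symbol indices, mod m
    diffs = [(b - a) % m for a, b in zip(symbols, symbols[1:])]
    counts = Counter(diffs)
    n = len(diffs)
    # relative frequency of each of the m possible transitions
    return [counts[d] / n for d in range(m)]

# a noiseless QPSK-like sequence only visits even 8PSK phase indices
qpsk_like = [0, 2, 6, 4, 2, 0, 6, 6, 4]
f = transition_features(qpsk_like)
assert all(f[d] == 0 for d in (1, 3, 5, 7))  # odd transitions never occur
```

Which transitions are possible (and with what probability) differs per candidate modulation, which is what gives the Bayesian classifier its discriminating power.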


Naser Hosein Gharavi, Abdorasool Mirqadri, Mohammad Abdollahi Azgomi, Sayyed Ahmad Mousavi,
Volume 15, Issue 3 (12-2018)
Abstract

Hellman’s time-memory trade-off is a probabilistic method for inverting one-way functions using pre-computed data. Hellman introduced this method in 1980 and obtained a lower bound on the success probability of his algorithm. Since then, further analyses by researchers have been based on this lower bound.
In this paper, we first study the expected coverage rate (ECR) of Hellman matrices constructed from a single chain. We show that the ECR of such matrices is maximal and equal to 0.85. In the process, we find that there is a gap between Hellman’s lower bound and the experimentally observed coverage rate of a Hellman matrix; the gap is larger for Hellman matrices constructed from a single chain. This motivated us to derive a more accurate formula for the ECR of a Hellman matrix. We present a new formula that estimates the ECR of a Hellman matrix more accurately than Hellman’s lower bound, and show that it closely matches experimental data.
Finally, we introduce a new method for constructing matrices that have a much higher ECR than Hellman matrices. Each matrix in this new method is constructed from a single chain, a non-repeating trajectory from a random starting point; the approach therefore yields a set of matrices, each containing one chain of variable length. The main advantage of this method is a higher success probability than Hellman’s method, although the online time and memory requirements increase. We have also verified the theory behind this new method with experimental results.
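The single-chain construction can be explored empirically with a small sketch (the space size, number of trials, and seeding are illustrative): iterate a random function from a random start until the first repeat, and measure the fraction of the space that the non-repeating trajectory covers.

```python
import random

def chain_coverage(n, seed=0):
    """Coverage of one non-repeating chain of a random function on {0..n-1}."""
    rng = random.Random(seed)
    f = [rng.randrange(n) for _ in range(n)]  # a random function on n points
    x, seen = rng.randrange(n), set()
    while x not in seen:                      # walk until the chain cycles
        seen.add(x)
        x = f[x]
    return len(seen) / n

# average over many random functions; for a single chain the number of
# distinct points before the first repeat is on the order of sqrt(n),
# so coverage of the whole space shrinks as n grows
avg = sum(chain_coverage(2**10, s) for s in range(100)) / 100
print(avg)
```

Simulations like this are one way to compare an estimated ECR formula against observed coverage.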
 


Dr. Hadi Soleimany, Alireza Mehrdad, Saeideh Sadeghi, Farokhlagha Moazemi,
Volume 16, Issue 4 (3-2020)
Abstract

The impossible differential attack is a powerful tool for evaluating the security of block ciphers; it is based on finding a differential characteristic with probability exactly zero. The diffusion rate of a cipher's linear layer plays a fundamental role in the algorithm's security against this attack. In this paper, we present an efficient method, independent of the quality of the linear layer, for finding impossible differential characteristics of the Zorro block cipher. In other words, using the proposed method we show that, independent of the linear layer and the other internal elements of the algorithm, it is possible to obtain an effective impossible differential characteristic for 9-round Zorro. Based on this 9-round impossible differential characteristic, we also provide a key-recovery attack on reduced 10-round Zorro. The main observation behind our method is that the number of differences that may occur in the middle of the Zorro algorithm can be very limited, due to Zorro's unusual structure; we show how this property can be used to construct impossible differential characteristics. It is important to note that the best impossible differential characteristics of the AES encryption algorithm cover only four rounds, so the best impossible differential characteristic of Zorro covers far more rounds than the best characteristic of AES, even though both algorithms use the same linear layer.
Moreover, the analysis presented in this article, in contrast to previous analyses, applies to all ciphers with the same structure as Zorro, because it is independent of the internal components of the algorithm. In particular, the method presented here shows that similar impossible differential characteristics exist for all modified versions of Zorro. Zorro is a block cipher with a 128-bit block size and a 128-bit key. It consists of 6 sections of 4 rounds each (24 rounds in total). Zorro has no key schedule: the master key is simply XORed into the state at the beginning of each section, and the internal rounds of a section do not use the key. As in AES, the Zorro state can be represented as a 4 × 4 matrix in which each of the 16 entries is one byte. One round of Zorro consists of four functions applied in order: SB*, AC, SR, and MC. The SB* function is a nonlinear function applied only to the four bytes of the first row of the state matrix; thus, unlike AES, where the substitution box is applied to all bytes, Zorro's substitution box is applied to only four bytes. The AC operation adds a round constant. Finally, the SR and MC transformations applied to the state matrix are, respectively, the ShiftRows and MixColumns operations of the standard AES algorithm. Since the analyses presented in this article are independent of the substitution box's properties, we do not use Zorro's S-box definition. Our proposed model exploits the property of Zorro that the number of possible differences after a limited number of rounds can be much smaller than the total number of possible differences; we introduce features of Zorro that yield an upper bound on the number of possible values of an intermediate difference.
We then present a model for finding impossible differential characteristics of Zorro, based on the limitations on the intermediate differences and using the miss-in-the-middle technique. Finally, we show that, with the proposed method, it is possible to find an impossible differential characteristic for 9 rounds of any algorithm with a Zorro-like structure, regardless of the linear layer's properties, and to mount a key-recovery attack on 10 rounds of the algorithm. Thus, regardless of the elements used, this number of rounds is not secure even if the linear layer is changed.
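The round structure described above can be sketched on the 4 × 4 byte state as follows (the S-box and round constants below are dummy placeholders, since the analysis is independent of them; SR and MC are AES's ShiftRows and MixColumns):

```python
def xtime(b):
    # multiply by x in GF(2^8) with the AES polynomial 0x11B
    return ((b << 1) ^ 0x1B) & 0xFF if b & 0x80 else (b << 1)

def mul(a, b):
    # GF(2^8) multiplication via shift-and-add
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        a, b = xtime(a), b >> 1
    return r

def sbox(b):
    return (b * 7 + 3) & 0xFF  # dummy affine bijection, NOT Zorro's S-box

def zorro_round(state, rc):
    s = [row[:] for row in state]
    s[0] = [sbox(b) for b in s[0]]                               # SB*: first row only
    s[0] = [b ^ ((rc + i) & 0xFF) for i, b in enumerate(s[0])]   # AC (illustrative constants)
    s = [row[i:] + row[:i] for i, row in enumerate(s)]           # SR: AES ShiftRows
    # MC: AES MixColumns, column by column
    M = [[2, 3, 1, 1], [1, 2, 3, 1], [1, 1, 2, 3], [3, 1, 1, 2]]
    out = [[0] * 4 for _ in range(4)]
    for c in range(4):
        for r in range(4):
            for k in range(4):
                out[r][c] ^= mul(M[r][k], s[k][c])
    return out

state = zorro_round([[0] * 4 for _ in range(4)], 1)
```

Because SB* touches only four of the sixteen bytes per round, differences in the other twelve bytes propagate linearly, which is the structural feature the analysis exploits.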

Ladan Riazi, Alireza Pourebrahimi, Mahmood Alborzi, Reza Radfar,
Volume 17, Issue 4 (2-2021)
Abstract

This paper presents a method for improving steganography and enhancing its security using combined metaheuristic algorithms. The goal is to achieve an improved PSNR value in order to preserve image quality in the steganography process.
To embed the message signal inside the host data, steganography algorithms make small, message-dependent changes to the host data so that they are not visible to the human eye. Each such algorithm has two phases: embedding the stego signal and extracting it. The stego signal can be embedded in the spatial domain or in a transform domain, and extraction can be done using correlation with the original watermark or independently of it; clearly, the choice of embedding method and the extraction procedure are interdependent. In spatial techniques, information is stored directly in the pixel color intensities, whereas in the transform domain the image is first converted to another domain (such as the frequency domain) and the information is then embedded in the transform coefficients. Optimization based on metaheuristic algorithms is widely used in this field, and many researchers have adopted it; given a suitable fitness function, these methods are useful in the design of steganography algorithms.
In this research, seven commonly used metaheuristic algorithms were selected: ant colony optimization, the bee algorithm, cuckoo search, genetic algorithms, particle swarm optimization, simulated annealing, and the firefly algorithm. Each was applied individually to the available data and its performance evaluated.
Among the applied algorithms, cuckoo search, firefly, and the bee algorithm, which achieved the best fitness values and therefore the highest quality, were selected. All six orderings for combining these three algorithms were then examined separately. The best combination is firefly, bee, and cuckoo search, which yields a mean peak signal-to-noise ratio of 54.89.
Compared to the individual ant colony, bee, cuckoo search, genetic, particle swarm optimization, simulated annealing, and firefly algorithms, the proposed combination improves the PSNR value by 59.29%, 29.61%, 37.43%, 52.56%, 54.84%, 57.82%, and 3.82%, respectively.
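For reference, the PSNR quality measure optimized above can be computed as follows (the pixel values are illustrative):

```python
import math

def psnr(original, stego, peak=255):
    """Peak signal-to-noise ratio in dB between two equal-length pixel lists."""
    mse = sum((a - b) ** 2 for a, b in zip(original, stego)) / len(original)
    return float('inf') if mse == 0 else 10 * math.log10(peak ** 2 / mse)

cover = [120, 121, 119, 200, 50, 51]
stego = [120, 120, 119, 201, 50, 50]   # LSB-style +/-1 changes
print(round(psnr(cover, stego), 2))    # about 51.14 dB for these values
```

Higher PSNR means the stego image is closer to the cover image, which is why PSNR serves naturally as the fitness function for the metaheuristics.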

Azam Soleimanian, Shahram Khazaei,
Volume 17, Issue 4 (2-2021)
Abstract

The growing amount of information arising from emerging technologies has caused organizations to face challenges in maintaining and managing their data. Two common approaches to these challenges are expanding in-house hardware and human resources, or outsourcing data management and maintenance to an external organization in the form of a cloud storage service. The first approach imposes costs on the organization and is only a temporary solution. By contrast, cloud storage services allow the organization to pay only a small fee for the space actually in use (rather than the total reserved capacity) while always having access to the data and to management tools with the most up-to-date mechanisms available. Despite their benefits, cloud storage services raise security challenges, because the organization's data is stored and managed outside the organization's supervision. One challenge is protecting the confidentiality of outsourced data. Encrypting data before outsourcing can address this challenge, but common encryption schemes may fail to support various functionalities of the cloud storage service. One of the most widely used functionalities is secure keyword search over the encrypted document collection. Searchable encryption schemes enable users to search securely over encrypted data. Based on users' needs, derivatives of this functionality have recently been considered by researchers. One such derivative is ranked search, which allows the server to rank results by their similarity to the searched keyword. This functionality reduces the communication overhead between the cloud server and the owner organization, as well as the search response time. In this paper, we focus on ranked symmetric searchable encryption schemes.
In this regard, we review two data structures proposed in symmetric searchable encryption schemes and show that they have capabilities beyond their original design goals. More precisely, we show that with small changes to these data structures it is possible to support secure ranked search efficiently. In addition, based on these modified structures, we present two ranked symmetric searchable encryption schemes: one for single-keyword search and one for Boolean keyword search.

Amir Jalaly Bidgoly, Abbas Dehghani,
Volume 18, Issue 1 (5-2021)
Abstract

LPWANs are a class of technologies with very low power consumption and long communication range. Along with their various advantages, these technologies also have many limitations, such as low bandwidth, connectionless transmission, and low processing power, which challenge encryption methods. One of the most important of these challenges concerns cipher chaining: the very small message size and the possibility of packet loss without the gateway or device being aware make chaining modes such as CBC, OFB, or CTR impractical in LPWANs, because they either assume a connection-oriented medium or consume part of the payload for a counter or HMAC. In this paper, we propose a new way to resynchronize the key between sender and receiver when a packet is lost, enabling cipher-chaining encryption within LPWAN limitations. The paper provides two encryption synchronization methods for LPWANs. The first resynchronizes using a mechanism similar to proof of work in a blockchain. The second synchronizes the sender and receiver with the least possible use of the message payload, and can even synchronize the parties without using the payload at all. The proposed method was implemented on the Sigfox platform and then simulated in a sample application. The simulation results show that the proposed method performs acceptably in environments where the probability of losing several consecutive packets is low.
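One way such payload-free resynchronization can work is sketched below (illustrative only, not the paper's exact protocol; the hash-chain key derivation, the 4-byte tag, and the window size are assumptions for the sketch): the sender advances its key by hashing after every packet, and a receiver that missed packets simply tries the next few derived keys until one authenticates.

```python
import hashlib

def next_key(k):
    # sender and receiver both derive key_i = H(key_{i-1}) per packet
    return hashlib.sha256(k).digest()

def try_resync(rx_key, packet_tag, window=8):
    """Slide forward through candidate keys until one matches the packet tag."""
    k = rx_key
    for step in range(window + 1):
        # toy 4-byte authentication tag derived from the candidate key
        if hashlib.sha256(k + b"tag").digest()[:4] == packet_tag:
            return step, k        # resynchronized after `step` lost packets
        k = next_key(k)
    return None                   # too many losses: out of the window

k0 = b"\x00" * 32
k3 = next_key(next_key(next_key(k0)))            # sender moved ahead 3 packets
tag = hashlib.sha256(k3 + b"tag").digest()[:4]   # tag of the packet that arrived
assert try_resync(k0, tag)[0] == 3
```

No payload bytes carry a counter; the receiver recovers the offset purely from trial authentication, at the cost of a bounded amount of extra computation.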

Hamid Darabian, Sattar Hashemi, Sajad Homayoon, Karamollah Bagherifard,
Volume 18, Issue 3 (12-2021)
Abstract

Nowadays, crypto-ransomware is considered one of the greatest threats in cybersecurity. Crypto-ransomware removes access to data by encrypting valuable files and demands a ransom payment to allow decryption. The number of crypto-ransomware variants has increased rapidly every year, and ransomware needs to be distinguished both from benign software and from other types of ransomware to protect users' machines from ransomware-based attacks. Most published works consider file-system and process behavior to identify ransomware, which depends on how quickly and accurately system logs can be obtained and mined to detect abnormalities. Given the severity of the irreparable damage of ransomware attacks, timely detection is of great importance. This paper focuses on the early detection of ransomware samples by analyzing the behavioral logs of programs executing on the operating system, before the malicious program destroys all the files. Sequential pattern mining is used to find maximal sequential patterns of activities within different ransomware families as candidate features for classification. First, we prepared a test environment to execute 572 TeslaCrypt, 535 Cerber, and 517 Locky ransomware samples and collect their activity logs. Our testbed can also be used in other projects where the automatic execution of malware samples is essential. We then extracted valuable features from the output of the sequence-mining technique to train a classification algorithm for detecting ransomware samples. Accuracy of 99% in distinguishing ransomware instances from benign samples, and of 96.5% in identifying the family of a given ransomware sample, demonstrates the usefulness and practicality of the proposed methods.
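The feature-construction step can be sketched as follows (the event names and patterns are invented for illustration): each candidate maximal sequential pattern becomes a binary feature indicating whether it occurs, in order, within a program's activity log.

```python
def contains_subsequence(log, pattern):
    """True if `pattern` occurs as an ordered (not necessarily contiguous)
    subsequence of the event log."""
    it = iter(log)
    # `ev in it` consumes the iterator up to the first match, preserving order
    return all(ev in it for ev in pattern)

# hypothetical mined patterns, one per behaviour of interest
patterns = [("open", "read", "encrypt", "write"), ("enum_files", "delete")]

log = ["open", "stat", "read", "encrypt", "write", "close"]
features = [int(contains_subsequence(log, p)) for p in patterns]
assert features == [1, 0]  # first pattern present, second absent
```

Feature vectors built this way from the mined patterns are what the classifier is trained on.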

Javad Alizadeh, Nasour Bagheri,
Volume 19, Issue 4 (3-2023)
Abstract

Over the last years, the concept of the Internet of Things (IoT) has led to a revolution in communications between humans and things, with security and efficiency as its main challenges. Authenticity and confidentiality are two important goals in providing the desired security in an information system, including IoT-based applications. An Authentication and Key Agreement (AKA) protocol is a tool for achieving authenticity and agreeing on a secret key to provide confidentiality; therefore, using a secure AKA protocol, one can establish both. In recent years, several articles have discussed AKA protocols in wireless sensor networks (WSNs). For example, in 2014, Turkanovic et al. proposed a new AKA scheme for heterogeneous ad-hoc WSNs; in 2016, Sabzinejad et al. presented an improved one; and in 2017, Jiang et al. introduced a secure AKA protocol. Other AKA protocols have been presented in the last three years. All the mentioned protocols are lightweight, require minimal resources, and try to decrease computation and communication costs in the WSN context.
In 2019, Janababaei et al. proposed an AKA scheme in WSNs for IoT applications in the journal of Signal and Data Processing (JSDP). With respect to efficiency, the protocol uses only a hash function, bitwise XOR, and concatenation, and can therefore be considered lightweight. The authors also discussed the security of their scheme and claimed that the protocol offers anonymity and trust and is secure against traceability, impersonation, replay, and man-in-the-middle attacks. However, despite their claims, this research highlights some vulnerabilities in that protocol, to the best of our knowledge for the first time. More precisely, we show that a malicious sensor node can find the secret parameters of another sensor node when it establishes a session with the victim sensor. Moreover, an adversary can determine any session key of two sensor nodes given only one known session key of theirs. We also show that the protocol does not satisfy the anonymity of the sensor nodes. Other attacks that affect the Janababaei et al. scheme are impersonation attacks on the sensor nodes and cluster heads, as well as the man-in-the-middle attack.
In this paper we find that the main weaknesses of the Janababaei et al. protocol are related to the computation of the session key. We also propose a simple remedy to enhance the protocol's security: an initial improvement is to apply a hash function to the calculated key. This suggestion is presented to counter the weaknesses observed in this paper, but it does not mean that there are no other security issues in the protocol. Therefore, modifying and improving the Janababaei et al. protocol so that it provides other security features can be considered in future research. Moreover, since this paper focuses on the security of the protocol, its efficiency was not discussed; modifying the protocol's message structure to reduce its computational and communication costs is another possible direction for future work.
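The suggested remedy can be sketched as follows (the function and parameter names are illustrative, not the protocol's notation): passing the calculated value through a one-way hash, bound to a per-session value, prevents a known session key from revealing the underlying secret material or other sessions' keys.

```python
import hashlib

def session_key(raw_shared_value: bytes, session_id: bytes) -> bytes:
    # one-way derivation: learning the output does not expose the raw value,
    # and different sessions yield unrelated keys
    return hashlib.sha256(raw_shared_value + session_id).digest()

k1 = session_key(b"shared-secret-material", b"session-001")
k2 = session_key(b"shared-secret-material", b"session-002")
assert k1 != k2 and len(k1) == 32
```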

Engineer Ali Mohammad Norouzzadeh Gilmolk, Doctor Mohammad Reza Aref, Doctor Reza Ramazani Khorshidoust,
Volume 19, Issue 4 (3-2023)
Abstract

Nowadays, achieving desirable and stable security in networks of national and organizational scope, and even in sensitive information systems, must be based on a systematic and comprehensive method carried out step by step. Cryptography is the most important mechanism for securing information. A cryptographic system consists of three main components: cryptographic algorithms, cryptographic keys, and security protocols, the last of which are mainly based on cryptographic algorithms. In designing a cryptographic algorithm, all the necessary components of information security must be considered in a model of excellence covering technical, organizational, procedural, and human aspects. To meet these needs, we must first extract the components effective in the design and implementation of cryptographic algorithms based on a model, and then determine the impact of each component. In this paper, we use cybernetic methodology to prepare such a metamodel.
 
The cryptographic cybernetics metamodel has four components: "strategy/policy", "main process", "support process", and "control process". The main process has four stages, and the support process includes 13 hardware and software components. The interactions of these two processes shape the metamodel's structure, leading to a complex graph. To prioritize support components for resource allocation and cryptography strategy, these components must be ranked within the designed metamodel. To overcome this complexity, we rank the support components using ELECTRE III, a multi-criteria decision-making method. The results show that the highest-priority components for the development of the cryptographic system are: research and development; human resources; management; organization; information and communication technology; and rules, regulations, and standards. These results are consistent with reports published by the ITU in 2015, 2017, and 2018.

Zeinab Haj-Hosseini, Mohammad-Ali Doostari, Hamed Yusefi,
Volume 21, Issue 2 (10-2024)
Abstract

In recent years, embedded systems have continuously gained importance. This ubiquity is accompanied by an increased need for embedded security. Cryptography can address these security requirements. Many symmetric and asymmetric algorithms, such as AES, DES, RSA, ElGamal, and ECC, have been implemented on embedded devices.
All frequently implemented public-key cryptosystems rely on the presumed hardness of either factoring the product of two large primes (FP) or computing discrete logarithms (DLP). These two problems are closely related. Therefore, solving these problems would have significant ramifications for classical public-key cryptography and, consequently, for all embedded devices that utilize these algorithms.
Currently, both problems are believed to be computationally infeasible with a conventional computer. However, a quantum computer capable of performing computations on a few thousand qubits could solve both problems using Shor's algorithm [1]. Although a quantum computer of this scale has not been reported, it could become a reality within the next one to three decades. Consequently, the development and cryptanalysis of alternative post-quantum cryptosystems are crucial. Post-quantum cryptosystems are cryptosystems that are not susceptible to the critical security loss or complete compromise caused by quantum computers.
One of the major security challenges is the development of quantum computers and the potential compromise of current cryptosystems in the future. It is therefore essential to consider post-quantum cryptographic algorithms and the challenges of implementing and attacking them. Post-quantum cryptosystems encompass several families, including hash-based, multivariate-quadratic-equation, lattice-based, and code-based cryptography. In this study, our focus is on the code-based QC-MDPC McEliece algorithm. To gain popularity in practice, post-quantum public-key schemes should be optimized for implementation and efficient in execution; McEliece encryption and decryption do not require computationally expensive processing, which makes the scheme well suited to implementation [2].
One implementation challenge for these algorithms is the large key length, an important issue on embedded systems. In addition, countering side-channel attacks caused by information leakage from the hardware is crucial. We address the first issue by reducing the key length from 1200 bytes to 180 bytes while providing 80-bit security, and we introduce a new method for implementing the QC-MDPC McEliece cryptosystem. Differential power analysis (DPA) attacks exploit the relationship between power consumption and intermediate data to recover the key. In this study, we apply a masking technique to the finite-field multiplication in the syndrome-computation part of the decryption algorithm, implementing the Threshold Implementation (TI) masking countermeasure against DPA to eliminate the information leaks of the previous implementation.
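As a first-order illustration of the masking idea (far simpler than a real threshold implementation, and not the paper's code): a secret is split into random shares that are processed share-wise, so no single intermediate value depends on the unmasked secret. The sketch below covers only a linear XOR step; TI extends this principle to nonlinear operations using more shares.

```python
import secrets

def mask(x: int) -> tuple:
    # split x into two Boolean shares that XOR back to x
    m = secrets.randbits(8)
    return (x ^ m, m)

def masked_xor(sa, sb):
    # XOR is linear, so it is computed share-wise without ever
    # recombining the secrets
    return (sa[0] ^ sb[0], sa[1] ^ sb[1])

a, b = 0x5A, 0xC3
sa, sb = mask(a), mask(b)
sc = masked_xor(sa, sb)
assert sc[0] ^ sc[1] == a ^ b    # unmasking yields the true result
```

Because each share is uniformly random on its own, the power consumption of any single intermediate is statistically independent of the secret, which is what defeats first-order DPA.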



© 2015 All Rights Reserved | Signal and Data Processing