    Journal of Information and Communication Technology (Scientific)
  • About the journal

    According to letter No. 3/4817, dated 2007/09/02, from the Research Affairs Office of the Iranian Ministry of Science, Research and Technology, and the vote of the national Commission for the Review of Scientific Journals on 2007/07/14, this journal has been awarded a scientific-research rank.

    Journal of Information and Communication Technology (JICT) belongs to the Iranian Information and Communication Technology Association and is a peer-reviewed, open-access, bi-quarterly journal publishing high-quality scientific papers covering all aspects of information and communication technologies, including computer science and engineering, information technology, communication technology, and information technology management. It is an interdisciplinary journal devoted to publishing original articles, review articles, and related work in accordance with research ethics and academic regulations.
    All papers are subject to a blind review process; submitted manuscripts are published after thorough review and the editorial board's approval.


    Recent Articles

    • Open Access Article

      1 - Design and implementation of a survival model for patients with melanoma based on data mining algorithms
      Farinaz Sanaei, Seyed Abdollah Amin Mousavi, Abbas Toloie Eshlaghy, Ali Rajabzadeh Ghotri
      Issue 57, Vol. 15, Autumn 2023
      Background/Purpose: Among the most commonly diagnosed cancers, melanoma is the second leading cause of cancer-related death. A growing number of people are becoming victims of melanoma, which is also the most malignant and rare form of skin cancer. In advanced cases, the disease may cause death by spreading to internal organs. The National Cancer Institute reported that approximately 99,780 people were diagnosed with melanoma in 2022 and approximately 7,650 died. This study therefore aims to develop an optimized algorithm for predicting melanoma patients' survival. Methodology: This applied research was a descriptive-analytical, retrospective study. The study population included patients with melanoma identified from the National Cancer Research Center at Shahid Beheshti University between 2008 and 2013, with a five-year follow-up period. An optimal model for melanoma survival prognosis was selected based on the evaluation metrics of the data mining algorithms. Findings: A neural network, a Naïve Bayes network, a Bayesian network, a combination of decision tree and Naïve Bayes network, logistic regression, J48, and ID3 were the models applied to the national database. Statistically, the neural network outperformed the other selected algorithms on all evaluation metrics. Conclusion: The neural network showed optimal reliability, with a value of 0.97, and the resulting melanoma survival model performed better in terms of both discrimination power and reliability. This algorithm was therefore proposed as the melanoma survival prediction model.

    • Open Access Article

      2 - An Intrusion Detection System based on Deep Learning for CAN Bus
      Fatemeh Asghariyan, Mohsen Raji
      Issue 57, Vol. 15, Autumn 2023
      In recent years, with the advancement of automotive electronics and the development of modern vehicles built on embedded systems and portable equipment, in-vehicle networks such as the controller area network (CAN) have faced new security risks. Since the CAN bus lacks security mechanisms such as authentication and encryption to deal with cyber-attacks, an intrusion detection system is needed to detect attacks on it. In this paper, a deep adversarial neural network (DACNN) is proposed to detect various types of security intrusions on CAN buses. The DACNN method, an extension of the CNN method using adversarial learning, detects intrusions in three stages: first, the CNN acts as a feature descriptor and extracts the main features; second, a discriminating classifier classifies these features; and finally, intrusions are detected using adversarial learning. To show the efficiency of the proposed method, a real open-source dataset was used, in which CAN network traffic was recorded on a real vehicle during message-injection attacks. The results show that the proposed method outperforms other machine learning methods in terms of false negative rate and error rate, which are less than 0.1% for DoS, drive-gear forgery, and RPM forgery attacks, and less than 0.5% for fuzzy attacks.

    • Open Access Article

      3 - The framework of the national macro plan for transparency and information release based on the grounded theory method
      Mahdi Azizi MehmanDoost, Mohammad Reza Hosseini, Reza Taghipour, Mojtaba Mazoochi
      Issue 57, Vol. 15, Autumn 2023
      The purpose of this research is to present the framework of a national macro plan for transparency and information release. The research employs an integrated (qualitative and quantitative) approach with grounded theory as its methodology. In the qualitative part, through an in-depth, exploratory review of upstream laws and documents, models, theories, plans, and white papers of different countries related to transparency and information release, data were analyzed to theoretical saturation through three stages of open, axial, and selective coding. To derive the dimensions, components, and subcomponents of the framework, 129 concepts were extracted from 620 primary codes, which were reduced to 593 secondary codes by removing duplicates. Finally, 24 subcategories were placed under the five main components based on the paradigm model. In the quantitative section, analysis of the questionnaire indicated that its validity across different dimensions was between 0.87 and 0.92, and its reliability coefficient between 0.73 and 0.78. Based on the data analysis, establishing a supranational management institution for transparency and information release, precisely determining exceptions, network governance, demanding transparency, adherence to frameworks, maximum disclosure with support for legitimate disclosure, and establishing a data governance center are among the subcategories emphasized in this framework.

    • Open Access Article

      4 - Drivers, obstacles, and consequences of digital entrepreneurship in Iran's road freight transportation industry
      Azam Sadat Mortazavi Kahangi, Parviz Saketi, Javad Mehrabi
      Issue 57, Vol. 15, Autumn 2023
      The purpose of this research is to identify the drivers, obstacles, and consequences of digital entrepreneurship in Iran's road freight transportation industry. The statistical population in the qualitative part consisted of 20 experts in this field, selected until theoretical saturation was reached. In the quantitative part, 170 employees of the industry were sampled using Cochran's formula and cluster sampling. Data were collected through semi-structured interviews in the qualitative part and a researcher-made questionnaire, whose validity and reliability were checked and confirmed, in the quantitative part. For data analysis, systematic literature review and coding with MAXQDA software were used in the qualitative part, and inferential statistics with SPSS and LISREL software in the quantitative part. Finally, 9 indicators across 4 driver factors, 11 indicators across 3 obstacle factors, and 55 indicators across 8 consequence categories were extracted and prioritized using factor analysis. The results show that the political component has the highest priority among the drivers, and political obstacles have the highest priority among the obstacles; the role of the government in this field is therefore very important.

    • Open Access Article

      5 - WSTMOS: A Method for Optimizing Throughput, Energy, and Latency in Cloud Workflow Scheduling
      Arash Ghorbannia Delavar, Reza Akraminejad, Sahar Mozafari
      Issue 57, Vol. 15, Autumn 2023
      The use of cloud computing in datacenters around the world has led to the generation of more CO2 gas; in addition, energy and throughput are the two most important issues in this field. This paper presents an energy- and throughput-aware algorithm for scheduling compressed-instance workflows in the Internet of Things through cluster processing in the cloud. A method is presented for scheduling cloud workflows with the aim of optimizing energy, throughput, and latency. In the proposed method, time and energy consumption are improved over previous methods by creating distance parameters, clustering inputs, and considering real execution time. By considering these parameters and real execution times, the WSTMOS method reaches the optimized objective function. Moreover, the proposed method applies a task-to-virtual-machine time-distance parameter to reduce the number of virtual machine migrations. By organizing workflow inputs into low, medium, and heavy groups and distributing the load onto servers better suited to each processing threshold, WSTMOS optimizes energy and cost: energy consumption was reduced by 4.8 percent and cost by 4.4 percent compared to the methods studied. Finally, average delay time, power, and workload are also improved compared to previous methods.
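      The grouping-and-assignment idea in this abstract can be sketched in a few lines. This is a minimal illustration, not the paper's WSTMOS implementation: the task loads, cutoffs, and server capacities below are invented for the example.

```python
# Hedged sketch: group workflow tasks into low/medium/heavy classes and
# send heavier tasks to more capable servers first, in the spirit of the
# WSTMOS input clustering. All numbers are illustrative assumptions.

def classify_tasks(tasks, low_cut, high_cut):
    """Split tasks (name -> load) into low/medium/heavy groups."""
    groups = {"low": [], "medium": [], "heavy": []}
    for name, load in tasks.items():
        if load < low_cut:
            groups["low"].append(name)
        elif load < high_cut:
            groups["medium"].append(name)
        else:
            groups["heavy"].append(name)
    return groups

def assign(groups, servers):
    """Assign tasks round-robin, most capable servers first, heavy tasks first."""
    ranked = sorted(servers, key=servers.get, reverse=True)
    plan = {}
    for level in ("heavy", "medium", "low"):
        for i, name in enumerate(groups[level]):
            plan[name] = ranked[i % len(ranked)]
    return plan

tasks = {"t1": 2, "t2": 9, "t3": 5, "t4": 1}
groups = classify_tasks(tasks, low_cut=3, high_cut=7)
plan = assign(groups, {"s1": 16, "s2": 8})
```

A real scheduler would, as the paper describes, also weigh energy and real execution time rather than a single scalar load.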

    • Open Access Article

      6 - A web recommender system for predicting users' next pages using the DBSCAN clustering algorithm and the SVM machine learning method
      Reza Molaee Fard, Mohammad Mosleh
      Issue 57, Vol. 15, Autumn 2023
      Recommender systems can predict future user requests and generate a list of the user's favorite pages. In other words, they can build an accurate profile of users' behavior and predict the page a user will choose next, which can solve the system's cold-start problem and improve search quality. This research presents a new method for improving web recommender systems that uses the DBSCAN clustering algorithm to cluster data, achieving an efficiency score of 99%. The user's favorite pages are then weighted using the PageRank algorithm, the data are categorized using the SVM method, and a hybrid recommender system generates predictions, finally providing the user with a list of pages likely to be of interest. Evaluation of the results indicated that the proposed method achieves 95% recall and 99% accuracy, showing that the system correctly identifies more than 90% of the user's intended pages and largely resolves the weaknesses of previous systems.
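      The clustering step can be illustrated with a minimal, from-scratch DBSCAN over toy one-dimensional "page visit" features. The eps and min_pts values and the data are invented; the paper's full pipeline (PageRank weighting, SVM classification, the hybrid recommender) is not reproduced here.

```python
# Hedged sketch: minimal DBSCAN on 1-D toy data. Label -1 marks noise.

def dbscan(points, eps, min_pts):
    """Return a cluster label per point; -1 marks noise."""
    labels = [None] * len(points)
    cluster = -1

    def neighbors(i):
        return [j for j in range(len(points)) if abs(points[j] - points[i]) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1          # tentatively noise
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point, reclaimed from noise
            if labels[j] is not None:
                continue
            labels[j] = cluster
            more = neighbors(j)
            if len(more) >= min_pts:  # core point: keep expanding
                queue.extend(more)
    return labels

visits = [1.0, 1.1, 1.2, 5.0, 5.1, 9.9]
labels = dbscan(visits, eps=0.5, min_pts=2)  # two clusters plus one noise point
```

In a real system the points would be multi-dimensional session features, and eps/min_pts would be tuned on held-out data.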

    • Open Access Article

      7 - Face Recognition and Liveness Detection Based on Speech Recognition for Electronic Authentication
      Ahmad Dolatkhah, Behnam Dorostkar Yaghouti, Raheb Hashempour
      Issue 57, Vol. 15, Autumn 2023
      As technology develops, institutions and organizations provide many services electronically and intelligently over the Internet. The police, as an institution serving people and other institutions, aims to make its services smarter, and various electronic and intelligent systems have been offered to this end. Because these systems lack authentication, many services that could be provided online still require a visit to Police+10 service offices. Given budget and equipment limitations for in-person service, the limits of the police workforce and its focus on essential duties, the lack of service offices in villages and their small number in cities, and the growing demand for online services, especially in crises such as the COVID-19 pandemic, electronic authentication is becoming increasingly important. This article reviews electronic authentication and its necessity, along with liveness detection and face recognition, two of the most important technologies in this area. We then present an efficient face recognition method using deep learning models for face matching, as well as an interactive liveness detection method based on Persian speech recognition. The final section of the paper presents the results of testing these models on relevant data from this field.

    • Open Access Article

      8 - Estimating the Value of Digital Economy Core Spillover in Iran
      Niloufar Moradhassel, Bita Mohebikhah
      Issue 57, Vol. 15, Autumn 2023
      Background and Purpose: Most studies have discussed the direct effects of the ICT sector, but the indirect (spillover) effects and how to measure them have not been addressed; that issue is the focus of this article. For this purpose, after delineating the territory of the digital economy, the gross value of the core of the country's digital economy is estimated. Methodology: Using the Solow growth model, the spillover effects of the core of the digital economy (ICT) are estimated for the period 2002-2019. Findings: The results imply that over the period under review, given the elasticity of labor productivity relative to the share of net capital formation of the ICT sector in the national economy (about 0.3), the spillover effects of the digital economy core increased from 210 thousand billion Rials in 2015 to 279 thousand billion Rials in 2019.

    • Open Access Article

      9 - Applying deep learning to improve sentiment analysis of Persian comments on online retail stores
      Faezeh Forootan, Mohammad Rabiei
      Issue 57, Vol. 15, Autumn 2023
      The retail market industry is one of the industries that affects national economies, and its life depends on the level of customer satisfaction and trust in buying from these markets. In this situation, the industry is trying to provide conditions for customer feedback and interaction with retailers through web pages and online platforms, because the analysis of published opinions plays a role not only in determining customer satisfaction but also in improving products. Therefore, in recent years, sentiment analysis techniques for analyzing and summarizing opinions have attracted researchers in various fields, especially the retail market industry.

    • Open Access Article

      10 - Mapping the artifacts and producers of Iran's artificial intelligence ecosystem based on transformational levels
      Hamed Ojaghi, Iman Zohoorian Nadali, Fatemeh Soleymani Roozbahani
      Issue 57, Vol. 15, Autumn 2023
      As an emerging technological field, artificial intelligence has received increasing attention from companies and governments. The development of artificial intelligence at both the business and country levels depends on knowing the current situation. This paper identifies the artifacts and producers present in this field and maps them to transformational levels. Products/services and producers were identified through the capabilities provided by artificial intelligence; then, based on the classification methodology and meta-characteristics, the transformational levels of the artifacts of Iran's artificial intelligence ecosystem were extracted. In total, 562 products/services offered by 112 companies were identified. Machine vision and natural language processing top the technologies used, accounting for 44 and 27 percent of the products, respectively. Artifacts and producers were classified into eight transformative levels: individual, organization, industry, electronic chip/hardware, society, platform, code/algorithm/library, and infrastructure. Iran's artificial intelligence production has not grown in a balanced way: the three levels of platform, code/algorithm/library, and infrastructure, the main generators of other artificial intelligence products/services, have had the lowest production. It is suggested that a specialized marketplace for supplying artificial intelligence application programming interfaces be put on the agenda to stimulate the formation of the ecosystem.

    • Open Access Article

      11 - Noor Analysis: A Benchmark Dataset for Evaluating Morphological Analysis Engines
      Huda Al-Shohayyeb, Behrooz Minaei, Mohammad Ebrahim Shenassa, Sayyed Ali Hossayni
      Issue 57, Vol. 15, Autumn 2023
      The Arabic language has a very rich and complex morphology, and morphological analysis is very useful for processing Arabic, especially traditional texts such as historical and religious works, helping in understanding their meaning. In a morphological dataset, the variety of labels and the number of samples are what make it possible to evaluate morphological methods. The morphological dataset we present includes about 223,690 words from the book Sharia al-Islam, labeled by experts; it surpasses other datasets available for Arabic morphological analysis in both volume and label variety. To evaluate the data, we applied the Farasa system to the texts and report the annotation quality through four evaluations of the Farasa system.

    • Open Access Article

      12 - Improving deep learning-based intrusion detection systems for the Industrial Internet of Things using metaheuristic algorithms
      Mohammadreza Zeraatkarmoghaddam, Majid Ghayori
      Issue 57, Vol. 15, Autumn 2023
      Due to the increasing use of industrial Internet of Things (IIoT) systems, the intrusion detection system (IDS) is one of the most widely used security mechanisms in the IIoT, and deep learning techniques are increasingly used in such systems to detect attacks, anomalies, and intrusions. In deep learning, the most important challenge in training neural networks is determining their hyperparameters. To overcome this challenge, we present a hybrid approach that automates hyperparameter tuning in deep learning architectures, eliminating the human factor. In this article, an IDS for the IIoT based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks is tuned using the metaheuristic particle swarm optimization (PSO) and whale optimization (WOA) algorithms. This hybrid of neural networks and metaheuristics improves neural network performance, increases the detection rate, and reduces training time. In our method, the PSO-WOA algorithm determines the hyperparameters of the neural network automatically, without human intervention. The UNSW-NB15 dataset is used for training and testing. The PSO-WOA algorithm optimized the hyperparameters by limiting the search space, and the CNN-LSTM network was trained with the determined hyperparameters. The implementation results indicate that, in addition to automating hyperparameter determination, our method achieves a detection rate of 98.5%, a good improvement over other methods.
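      The idea of metaheuristic hyperparameter search can be sketched with a bare-bones PSO (without the WOA hybrid) minimizing a stand-in "validation loss". The surrogate function, bounds, and coefficients below are invented assumptions; no actual CNN-LSTM training is performed.

```python
import random

# Hedged sketch: plain PSO over two hyperparameters (log10 learning rate,
# hidden units). The quadratic surrogate pretends the optimum is
# lr = 1e-3, units = 128; a real run would evaluate validation loss.

def surrogate_loss(lr_exp, units):
    return (lr_exp + 3) ** 2 + ((units - 128) / 64) ** 2

def pso(n_particles=10, iters=50, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, -1), rng.uniform(16, 256)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: surrogate_loss(*p))[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                # inertia + cognitive + social terms (w=0.7, c1=c2=1.5)
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if surrogate_loss(*pos[i]) < surrogate_loss(*pbest[i]):
                pbest[i] = pos[i][:]
                if surrogate_loss(*pbest[i]) < surrogate_loss(*gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso()  # converges near lr_exp = -3, units = 128 on this surrogate
```

The paper's PSO-WOA hybrid additionally narrows the search space; this sketch only shows the swarm-update core.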

    • Open Access Article

      13 - Improving the load balancing in Cloud computing using a rapid SFL algorithm (R-SFLA)
      Kiomars Salimi, Mahdi Mollamotalebi
      Issue 57, Vol. 15, Autumn 2023
      Nowadays, cloud computing has many applications due to its various services. On the other hand, due to rapid growth, resource constraints, and final costs, cloud computing faces several challenges, among them load balancing. The purpose of load balancing is to manage load distribution among processing nodes so as to make the best use of resources while minimizing response time for users' requests. Several load balancing methods for cloud computing have been proposed in the literature; the shuffled frog leaping algorithm is a dynamic, evolutionary, nature-inspired approach. This paper proposes a modified rapid shuffled frog leaping algorithm (R-SFLA) that rapidly converges the defective evolution of frogs. To evaluate its performance, R-SFLA is compared to the shuffled frog leaping algorithm (SFLA) and the augmented shuffled frog leaping algorithm (ASFLA) in terms of overall execution cost, makespan, response time, and degree of imbalance. The simulation, performed in CloudSim, indicated that the proposed algorithm acts more efficiently than the other methods with respect to these factors.
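      One of the comparison metrics, the degree of imbalance, is easy to illustrate. The sketch below assumes the common definition (T_max - T_min) / T_avg over per-VM execution times; the paper may define it slightly differently, and the numbers are toy values.

```python
# Hedged sketch: degree of imbalance across virtual machines, a standard
# load-balancing evaluation metric. Lower is better.

def degree_of_imbalance(vm_times):
    """(T_max - T_min) / T_avg over per-VM total execution times."""
    avg = sum(vm_times) / len(vm_times)
    return (max(vm_times) - min(vm_times)) / avg

balanced = degree_of_imbalance([10.0, 10.5, 9.5])  # near-even load
skewed = degree_of_imbalance([2.0, 10.0, 18.0])    # one VM overloaded
```

A scheduler such as R-SFLA would be judged by how low it keeps this value alongside makespan and cost.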

    • Open Access Article

      14 - Evaluation of Interpolation Methods for Estimating the Fading Channels in Digital TV Broadcasting
      Ali Pouladsadeh, Mohammadali Sebghati
      Issue 57, Vol. 15, Autumn 2023
      Variation in telecommunication channels is a challenge of wireless communication that makes channel estimation and equalization a noteworthy issue. In OFDM systems, some subcarriers can be allocated as pilots for channel estimation. In pilot-aided channel estimation, interpolation is an essential step for obtaining the channel response at the data subcarriers. Choosing the best interpolation method has been the subject of various studies, because no single interpolator is best in all conditions; performance depends on the fading model, signal-to-noise ratio, and pilot overhead ratio. In this paper, the effect of different interpolation methods on the quality of DVB-T2 broadcast links is evaluated. A simulation platform is prepared in which different channel models are defined according to real-world measurements. Interpolation is performed by five widely used methods (nearest neighbor, linear, cubic, spline, and Makima) for different pilot ratios. After channel equalization with the interpolator's results, the bit error rate is calculated as the main criterion for evaluation and comparison, and rules for selecting the appropriate interpolator under different conditions are presented. In general, for fading scenarios close to flat fading, or for high pilot overhead ratios, simple interpolators such as the linear interpolator are proper choices; in harsh conditions, i.e., severely frequency-selective fading channels or low pilot overhead ratios, more complicated interpolators such as the cubic and spline methods yield better results. The amounts of improvement and difference are quantified in this study.
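      The simplest of the five interpolators, linear interpolation between pilot subcarriers, can be sketched as follows. The pilot pattern and channel values are toy numbers; real DVB-T2 uses scattered-pilot patterns and complex-valued channel responses.

```python
# Hedged sketch: pilot-aided channel estimation via linear interpolation.
# Pilots carry known symbols, so the channel is measured there; data
# subcarriers get estimates interpolated from the surrounding pilots.

def linear_interp(pilot_idx, pilot_vals, n_subcarriers):
    """Estimate the channel at every subcarrier from pilot estimates."""
    est = [0.0] * n_subcarriers
    for k in range(n_subcarriers):
        if k <= pilot_idx[0]:
            est[k] = pilot_vals[0]       # extrapolate flat at the edges
        elif k >= pilot_idx[-1]:
            est[k] = pilot_vals[-1]
        else:
            # find the surrounding pilot pair and interpolate linearly
            for i0, i1, v0, v1 in zip(pilot_idx, pilot_idx[1:],
                                      pilot_vals, pilot_vals[1:]):
                if i0 <= k <= i1:
                    t = (k - i0) / (i1 - i0)
                    est[k] = v0 + t * (v1 - v0)
                    break
    return est

# pilots on every 4th subcarrier of a slowly varying (near-flat) channel
pilots = [0, 4, 8]
values = [1.0, 0.8, 0.6]
h_est = linear_interp(pilots, values, 9)
```

On a near-flat channel like this toy one, linear interpolation tracks the response well, which matches the paper's rule of thumb; strongly frequency-selective channels favor cubic or spline fits.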

    • Open Access Article

      15 - A framework for architecting electronic trust in e-commerce: the online shopping segment
      Amir Mohtarami, Akbar Amini
      Issue 57, Vol. 15, Autumn 2023
      Today, e-commerce is rapidly expanding as a way of doing business in the modern world due to its advantages and benefits. The purpose of this study is to extract dimensions and criteria for providing electronic trust arrangements in B2C services, improving the internal processes of the business environment, and determining the importance and priority of each criterion for ensuring electronic trust in order to gain customers' trust and satisfaction. A mixed research method is employed, comprising a literature review, a field study, and opinion gathering alongside statistical techniques. The statistical population includes all expert customers of online stores in the city of Tehran, from whom a random sample was drawn. Questions on electronic trust and the provision of e-business services, and their priority relative to one another, are examined through inferential statistics. The data analysis shows a significant relationship between the 12 identified criteria and customer trust. The results, set in the conceptual framework, show the impact of three dimensions (psychological, technical, and legal) according to the criteria and indicators of electronic trust.

    • Open Access Article

      16 - Improving IoT resource management using fog computing and the ant lion optimization algorithm
      Payam Shams, Seyedeh Leili Mirtaheri, Reza Shahbazian, Ehsan Arianyan
      Issue 57, Vol. 15, Autumn 2023
      In this paper, a model based on metaheuristic algorithms for optimal allocation of IoT resources using fog computing is proposed. In the proposed model, a user request first enters the system as a workflow, and for each request the resource requirements (processing power, storage memory, and bandwidth) are extracted. A component then determines the requested application's traffic status in terms of real-time needs: if the application is not real-time and is somewhat tolerant of latency, the request is referred to the cloud environment, but if the application needs a prompt response and is latency-sensitive, it is handled with fog computing and assigned to one of the cloudlets. In this step, the ant lion optimization algorithm is used to select the best solution for allocating resources to serve users of the IoT environment. The proposed method is simulated in the MATLAB software environment, and five indicators are used to evaluate its performance: fog cell energy consumption, response time, fog cell imbalance, latency, and bandwidth. The results show that, compared to the baseline (ROUTER) design, the proposed method improves energy consumption, fog-cell latency, bandwidth consumption, load balance, and response time by 22, 18, 12, 22, and 47 percent, respectively.
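      The real-time dispatch rule described in this abstract can be sketched as a one-line decision. The request fields and the 50 ms threshold are invented for illustration; the paper's actual criteria and the ant lion optimization step are not reproduced here.

```python
# Hedged sketch: route latency-sensitive requests to a nearby fog
# cloudlet and delay-tolerant ones to the cloud, mirroring the paper's
# first dispatch stage. Field names and threshold are assumptions.

def dispatch(request, latency_threshold_ms=50):
    """Route a request to 'fog' or 'cloud' based on its latency budget."""
    if request["max_latency_ms"] <= latency_threshold_ms:
        return "fog"
    return "cloud"

video_call = {"max_latency_ms": 20}    # interactive: must stay local
backup_job = {"max_latency_ms": 5000}  # batch: cloud is fine
```

After this split, the paper applies ant lion optimization to pick which cloudlet serves each fog-bound request.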

    • Open Access Article

      17 - The effect of emotional intelligence of project managers on the effectiveness of team communication in Iranian research institutes (Case study: Research Institute of Communication and Information Technology)
      Mansoureh Mohammadnezhad Fadard, Ehram Safari
      Issue 57, Vol. 15, Autumn 2023
      Generally, in technical projects, especially in the field of information and communication technology, the most important criterion for assigning a project is technical capability, and less attention is paid to project managers' communication skills, such as emotional intelligence. This neglect appears to reduce the effectiveness of team communication and thus lead to project failure. The aim of this study was to measure the effect of project managers' emotional intelligence on the effectiveness of team communication in the projects of the Research Institute of Communication and Information Technology. The method is descriptive-analytical of the correlation type, and the statistical population consists of the institute's project managers and project team members: 19 project teams, selected by census. Data were collected with the Bar-On Emotional Intelligence Questionnaire and the Senior questionnaire for evaluating the effectiveness of project team communication, and analyzed using the Pearson correlation coefficient, multivariate regression, dummy-variable regression, and the dependent t-test. The results show that project managers' emotional intelligence affects effective communication in the project team; however, only intrapersonal skills, interpersonal skills, and adaptability predict effective communication within the team, while the general mood and stress management dimensions do not affect these relationships.

    • Open Access Article

      18 - Energy procurement of a cellular base station in independent microgrids with electric vehicles and renewable energy sources: Mixed-integer nonlinear programming model
      Reza Bahri saeed zeynali
      Issue 57 , Vol. 15 , Autumn 2023
      Cellular base stations are the communication devices that keep the world connected; nevertheless, they are often installed in remote locations. This paper studies the energy procurement of a cellular base station in an independent microgrid with a hydrogen-based energy storage system, a photovoltaic (PV) system, electric vehicles, and a diesel generator. A new mixed-integer nonlinear programming model is used to handle the nonlinearities of the system components. Uncertainties such as the connection rate of the cellular base station, electric-vehicle driver behavior, and PV generation are addressed using stochastic programming. The efficacy of the proposed method is examined in different case studies. The results show that smart electric-vehicle chargers reduce both risk and the cost/emission objective functions, and that the model can reduce emissions by as much as 18.60%.
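The stochastic-programming idea behind this abstract can be sketched in miniature: choose integer (on/off) generator decisions that minimize expected cost over PV scenarios. This toy brute-force sketch stands in for the paper's MINLP model; every number (loads, capacities, prices, probabilities) is invented.

```python
from itertools import product

# Hypothetical 3-hour horizon: pick the diesel generator's on/off schedule
# (the integer variables) minimizing expected cost over PV scenarios.
load = [4.0, 5.0, 6.0]                      # kW demand of the base station
scenarios = {                               # PV output per hour, with probability
    "sunny":  ([3.0, 4.0, 3.5], 0.6),
    "cloudy": ([1.0, 1.5, 1.0], 0.4),
}
diesel_cap, diesel_cost, shortage_cost = 5.0, 2.0, 10.0  # kW, $/kWh, $/kWh

def expected_cost(schedule):
    total = 0.0
    for pv, prob in scenarios.values():
        cost = 0.0
        for h, on in enumerate(schedule):
            gen = diesel_cap * on
            shortage = max(0.0, load[h] - pv[h] - gen)
            cost += diesel_cost * gen + shortage_cost * shortage
        total += prob * cost
    return total

# Brute-force the binary decisions (a real MINLP solver scales far better).
best = min(product([0, 1], repeat=3), key=expected_cost)
print(best, round(expected_cost(best), 2))
```

The paper's model additionally handles nonlinear component behavior, storage dynamics, and vehicle-charging decisions, which this sketch omits.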
    Most Viewed Articles

    • Open Access Article

      1 - Determining the factors affecting the crowdfunding of knowledge-based IT companies
      Ali Haji Gholam Saryazdi ali rajabzadeh ghotri alinaghi mashayekhi alireza hassanzade
      Issue 37 , Vol. 10 , Autumn_Winter 2019
      Crowdfunding has expanded rapidly worldwide, driven by the financing needs of early-stage start-up businesses and by advances in information technology. In Iran, several crowdfunding platforms have been established so far, some successful and some not. It is therefore necessary to support the development of this method by examining the factors that affect it. Since crowdfunding is a new phenomenon, public awareness of it must be raised appropriately while the factors affecting it are identified. Collective modeling draws on social networks and Web 2.0 with the aim of making sense of new phenomena. Accordingly, this article uses collective modeling to examine the factors affecting crowdfunding in Iran in order to support start-up companies in the IT field.

    • Open Access Article

      2 - A model of information technology adoption in academic research projects in the field of ICT based on Information Technology Adoption Integrated Modeling (ITAIM)
      Shahram Aliyari masoud movahedi sirous kazemian
      Issue 41 , Vol. 11 , Autumn_Winter 2020
      Today, the emergence and expansion of technologies that provide the widest possible connectivity have brought about significant changes in both the private and professional lives of individuals. The correct implementation of information technology drives economic and cultural development and improves quality of life through the exchange of information and the provision of public and private services. The purpose of this research is to present a model of information technology acceptance in Iranian ICT research centers. The statistical population consists of experts working on ICT projects at one of the university research centers, from which a non-probability convenience and purposive sample of 30 people was drawn. This paper examines the factors and parameters affecting the acceptance of information technology in ICT projects of university research centers. A questionnaire was used to collect the required data, which were analyzed using SPSS 22 and SmartPLS 3. According to the calculations, the factors affecting the acceptance of information technology in university research centers fall into four categories: IT-related factors, organizational factors, factors related to the executive director, and individual factors. Management-related factors (0.497), IT-related factors (0.460), and individual factors (0.457) affect individual acceptance of information technology, respectively, while organizational factors (0.469) affect organizational IT adoption.

    • Open Access Article

      3 - Image Processing of Steel Sheets for Defect Detection Using Gabor Wavelets
      masoud shafiee mostafa sadeghi
      Issue 13 , Vol. 4 , Spring 1391
      At different stages of steel production, various defects appear on the surface of the sheet. Setting aside the causes of these defects, precise identification of their type helps classify steel sheets correctly, and so accounts for a large share of the quality-control process. Quality control of steel sheets is of great importance for improving product quality and remaining competitive in the market. In this paper, after a quick review of the image-processing techniques in use, a fast and precise solution based on Gabor wavelets is presented for detecting texture defects in steel sheets. In the first step, the approach extracts rich texture information from the image using Gabor wavelets, covering both different orientations and different frequencies. Statistical methods are then used to select the images with the most pronounced defects and to locate those defects. Experimental samples demonstrate the accuracy and speed of the method.
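To make the orientation- and frequency-selective filtering concrete, here is a minimal sketch of a real-valued 2-D Gabor kernel (a Gaussian-windowed cosine grating). This is the textbook formula, not necessarily the exact filter bank the authors used; the parameter values are arbitrary.

```python
import math

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor filter: a Gaussian-windowed cosine grating
    tuned to spatial wavelength `wavelength` and orientation `theta`."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates to the filter orientation theta
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
            row.append(g * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel

k = gabor_kernel(size=9, wavelength=4.0, theta=0.0, sigma=2.0)
print(len(k), len(k[0]), k[4][4])  # 9 x 9 kernel, peak value 1.0 at the centre
```

Convolving an image with a bank of such kernels at several orientations and wavelengths yields the texture responses from which defect regions can be selected statistically.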

    • Open Access Article

      4 - Designing the first database of handwritten Kurdish words for image-based word recognition systems
      fatemeh daneshfar basir alagheband vahid sharafi
      Issue 17 , Vol. 5 , Spring_Summer 1393
      Abstract: Databases are one of the foundational components of image-based word recognition systems. Any system designed in this field must necessarily use some form of database. Clearly, since the subject of study in these systems is the written form of different languages, each language requires its own dedicated database. The language this article focuses on is Kurdish, and the article describes the various stages of designing the first handwritten-word database for the Kurdish language. Since no database dedicated to image-based word recognition has yet been designed for Kurdish, this is fresh and fertile ground for research. Moreover, given that Kurdish is written in two different scripts, Latin and Aramaic, this article deals exclusively with the Aramaic script, specifically in its handwritten form.

    • Open Access Article

      5 - A Satellite Control Method Using Laguerre Model Predictive Control Approach
      shekoofeh jafari fesharaki farzad tihidkhah heydarali talebi
      Issue 15 , Vol. 5 , Summer 1392
      In this paper, a controller based on model predictive control is proposed to control a satellite. Model predictive control (MPC) is well established as a practical control method for various systems in industry. A drawback of the method is its computational effort and the time it consumes. To reduce the computational load, Laguerre functions are employed in this work. Simulation results are given to show the feasibility and validity of the design, together with a comparison of the computation time in the presence and absence of the Laguerre functions.
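As background, Laguerre-based MPC parameterizes the future control trajectory with a small set of orthonormal discrete-time Laguerre functions, which is where the computational saving comes from. The sketch below generates such a basis using the standard state-space recurrence L(k+1) = A_l·L(k) (as in Liuping Wang's formulation); the pole a and dimensions are arbitrary illustrative choices, not the paper's tuning.

```python
import math

def laguerre_basis(a, N, steps):
    """Generate discrete-time Laguerre functions L(0..steps-1) of dimension N.

    a in (0, 1) is the Laguerre pole. The functions form an orthonormal
    basis, so a long control trajectory can be captured by N coefficients.
    """
    beta = 1.0 - a * a
    # Lower-triangular A_l: a on the diagonal, beta * (-a)**(i-j-1) below it
    A = [[a if i == j else (beta * (-a) ** (i - j - 1) if i > j else 0.0)
          for j in range(N)] for i in range(N)]
    L = [math.sqrt(beta) * (-a) ** i for i in range(N)]
    out = [L]
    for _ in range(steps - 1):
        L = [sum(A[i][j] * L[j] for j in range(N)) for i in range(N)]
        out.append(L)
    return out

basis = laguerre_basis(a=0.5, N=3, steps=200)
# Orthonormality check: sum_k L(k) L(k)^T should approach the identity matrix
gram = [[sum(Lk[i] * Lk[j] for Lk in basis) for j in range(3)] for i in range(3)]
print([round(gram[i][i], 3) for i in range(3)])
```

The near-identity Gram matrix confirms orthonormality, which is what lets the MPC optimization work over a few Laguerre coefficients instead of every future control move.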

    • Open Access Article

      6 - A conceptual model for identifying the quality of electronic services in the country's online stores
      marziye rahmani Mohammad Fathian saied yaghoubi
      Issue 30 , Vol. 8 , Autumn_Winter 2016
      This study addresses the gap in scientific studies on localizing the factors affecting the quality of electronic services in the country's online stores by presenting a conceptual model using structural equation modeling. It is a quantitative survey study in which a structured questionnaire was distributed among the buyers of three online stores. Cronbach's alpha is used to evaluate reliability, exploratory factor analysis to classify the sub-indicators, and confirmatory factor analysis to examine the validity of the construct and the resulting variables and indices.
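For reference, the reliability statistic mentioned above, Cronbach's alpha, is computed from item variances and the variance of respondents' total scores. The questionnaire data below are invented toy values, not the study's survey responses.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item):
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(items)
    n = len(items[0])

    def variance(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Toy questionnaire: 3 items answered by 5 respondents (invented scores)
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 3, 4, 1],
]
print(round(cronbach_alpha(items), 3))
```

Values above roughly 0.7 are conventionally taken to indicate acceptable internal consistency.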

    • Open Access Article

      7 -
      mostafa sadeghi masoud shafiee
      Issue 14 , Vol. 4 , Autumn_Winter 2012

    • Open Access Article

      8 - An Investigation of the Effect of a Multifactor Model for Improving Critical Thinking in E-learning Environments
      mohammadreza nili jamshid heydari hossein moradi
      Issue 21 , Vol. 6 , Spring 1394
      In the third millennium, people face multiple, diverse, and complicated problems, since no one can fully keep up with the information that is constantly produced and accumulated. Strong critical-thinking skills for assessing the outcomes of different issues and making evidence-based decisions about them are therefore an unavoidable necessity. The researchers propose a model with seven factors (components) for critical thinking in e-learning environments. The statistical population comprises M.Sc. medical-education students in the e-learning programs of Islamic Azad University, together with students of the same field in the university's traditional education system, studying during 2011-2012. From this population, 47 members were selected by simple random sampling and divided into a trial group (with 23 members) and a reference group (with 42 members). The trial group was trained using the seven-factor critical-thinking training scale in an e-learning environment over 15 sessions of an empirical-sciences course; the reference group received the same seven-factor training in a lecture-based classroom over 15 sessions of the same course. The model's factors and components are challenge, representation, creation of opportunity, creation of motivation, logical analysis, encouragement, responsibility, and commitment. Both groups completed the Watson-Glaser Critical Thinking Appraisal in pretest and posttest steps, and analysis of covariance was used to analyze the results. The results indicate a significant difference between the scores of the trial and reference groups in improving students' critical thinking in terms of the inference, assumption-detection, deduction, interpretation, and logical-reasoning evaluation components (p=0.001). Accordingly, in terms of improving critical thinking, the trial group trained in the e-learning environment scored higher than the group trained in the traditional classroom environment.

    • Open Access Article

      9 - Routing improvement to control congestion in software-defined networks using distributed controllers
      saied bakhtiyari Ardeshir Azarnejad
      Issue 39 , Vol. 11 , Spring_Summer 2019
      Software-defined networks (SDNs) are flexible in determining network-traffic routing because they separate the data plane from the control plane. One of the major challenges in SDNs is choosing the right locations for placing and distributing controllers so that the delay between controllers and switches in wide-area networks is reduced. Most of the proposed methods focus on reducing latency, but latency is only one factor in network efficiency and in the overall cost between controllers and their switches. This article considers additional factors in reducing that cost, such as the traffic on communication links, and presents a cluster-based algorithm for network partitioning. With this algorithm, each part of the network can reduce the maximum cost (including delay and link traffic) between the controller and its associated switches. Extensive simulations were performed on real network topologies from the Topology Zoo. The results show that when the probability of congestion increases, the proposed algorithm controls congestion by identifying the bottleneck links on the communication paths between each node and the other nodes. By considering the two criteria of delay and link utilization, the placement and distribution of controllers during clustering is carried out with higher accuracy; as a result, the maximum end-to-end cost between each controller and its switches decreases by 41.2694%, 29.2853%, 21.3805%, and 46.2829% in the Chinanet (China), Uunet (United States), DFN (Germany), and Rediris (Spain) topologies, respectively.

    • Open Access Article

      10 - Medial-axis Enhancement of Tubular Structures and its Application in the Extraction of Portal Veins
      amirhossein forouza reza aghaeizade youshi sato masa houri
      Issue 13 , Vol. 4 , Spring 1391
      In this paper, a new filter is designed to enhance the medial axis of tubular structures. Based on a multi-scale method and using the eigenvectors of the Hessian matrix, the distance from a point to the edges of the tube is found. To do this, a hypothetical line in an arbitrary direction is passed through the point, cutting the tube at its edges. For points located on the medial axis, this distance is symmetric with respect to any chosen direction. We sample the distances in different directions and assign each point a measure based on this symmetry property. The output of this step is an enhanced image in which noise is removed and tubes can be seen more clearly. We then employ the filter developed by Pock et al. to enhance the medial axis. The proposed method is evaluated quantitatively and qualitatively on 2D/3D synthetic and clinical datasets.
    Upcoming Articles

    • Open Access Article

      1 - Improving resource allocation in mobile edge computing using gray wolf and particle swarm optimization algorithms
      seyed ebrahim dashti saeid shabooei
      Mobile edge computing improves end users' experience of obtaining appropriate services and service quality. In this paper, the problem of improving resource allocation when offloading tasks from mobile devices to edge servers is investigated. Some tasks are processed locally and some are offloaded to edge servers. The main issue is scheduling the offloaded tasks on the virtual machines of the computing network so as to minimize computing time, service cost, wasted network capacity, and the maximum connection of a task to the network. A hybrid particle swarm and grey wolf optimization algorithm is introduced to manage resource allocation and task scheduling and to achieve an optimal result in edge-computing networks. The comparison results show improvements in waiting time and cost under the proposed approach: on average, the proposed model performs better, reducing completion time by 10% and increasing resource utilization by 16%.

    • Open Access Article

      2 - Identifying and ranking factors affecting the digital transformation strategy in Iran's road freight transportation industry focusing on the Internet of Things and data analytics
      Mehran Ehteshami Mohammad Hasan Cheraghali Bita Tabrizian Maryam Teimourian sefidehkhan
      This research aims to identify and rank the factors affecting the digital transformation strategy in Iran's road freight transportation industry, focusing on the Internet of Things and data analytics. After reviewing the literature, semi-structured interviews were conducted with 20 academic and road-freight-industry experts in Iran, selected using purposive sampling and the saturation principle. In the quantitative part, the opinions of 170 employees of this industry, selected using Cochran's formula and stratified sampling, were collected with a researcher-made questionnaire. The Delphi technique, literature review, and coding were used to analyze the data in the qualitative part; inferential statistics with SPSS and SmartPLS software were used in the quantitative part. Finally, 40 indicators were extracted under 8 factors, and the indicators and factors were ranked using factor analysis. The results show that internal factors rank highest, followed by software infrastructure, hardware infrastructure, economic, external, legal, cultural, and penetration factors, respectively. It is therefore suggested that organizations align their human-resource empowerment programs with the use of technology and digital tools.

    • Open Access Article

      3 - Persian Stance Detection Based On Multi-Classifier Fusion
      Mojgan Farhoodi Abbas Toloie Eshlaghy Mohammadreza Motadel
      Stance detection (also known as stance classification, stance prediction, and stance analysis) is a recent research topic that has become an important emerging paradigm in opinion mining. Stance detection determines the author's viewpoint toward a given target, and it is a core part of a set of approaches to fake-news assessment. In this paper, we applied three approaches, machine learning, deep learning, and transfer learning, to Persian stance detection, and then proposed a multi-classifier fusion framework for reaching a final decision on their outputs. We used a majority-voting fusion method weighted by classifier accuracy to combine the results and obtain the final estimate. The experimental results showed that the proposed multi-classifier fusion method performs better than the individual classifiers.
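The accuracy-weighted majority vote described above can be sketched in a few lines. The classifier names, labels, and accuracy figures below are hypothetical placeholders, not the paper's actual models or results.

```python
from collections import defaultdict

def fuse(predictions, accuracies):
    """Accuracy-weighted majority vote over per-classifier label predictions.

    predictions: {classifier_name: label}; accuracies: {classifier_name: float}.
    Each classifier's vote counts in proportion to its measured accuracy.
    """
    votes = defaultdict(float)
    for name, label in predictions.items():
        votes[label] += accuracies[name]
    return max(votes, key=votes.get)

# Hypothetical stance labels from three classifiers for one text
preds = {"ml": "favor", "dl": "against", "transfer": "against"}
accs = {"ml": 0.78, "dl": 0.85, "transfer": 0.81}
print(fuse(preds, accs))  # "against" wins: 0.85 + 0.81 > 0.78
```

Weighting by accuracy lets a strong classifier overrule two weak ones when they disagree, while plain majority voting treats all three equally.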

    • Open Access Article

      4 - Intrusion Detection Based on Cooperation on the Permissioned Blockchain Platform in the Internet of Things Using Machine Learning
      Mohammad Mahdi  Abdian majid ghayori Seyed Ahmad  Eftekhari
      Intrusion detection systems pursue several objectives, such as increasing the true detection rate, reducing detection time and computational load, and preserving the resulting logs so that they cannot be manipulated or deleted by unauthorized parties. This study addresses these challenges by exploiting the advantages and durability of blockchain technology, relying on an IDS architecture based on multi-node cooperation. The proposed model is an intrusion-detection engine based on the decision-tree algorithm, implemented in the nodes of an architecture consisting of several nodes connected on the blockchain platform. The resulting model and logs are stored on the blockchain and cannot be manipulated. Besides the benefits of using blockchain, memory usage is reduced and transaction speed and time are improved. Several evaluation models are designed for single-node and multi-node architectures on the blockchain platform. Finally, a proof of the architecture, possible threats to it, and defensive measures are explained. The most important advantages of the proposed scheme are eliminating the single point of failure, maintaining trust between nodes, and ensuring the integrity of the model and the discovered logs.
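The tamper-evidence property this abstract relies on can be illustrated with a minimal hash chain: each block stores the hash of its predecessor, so editing any stored log invalidates every later link. This is only a sketch of the principle, not the paper's permissioned-blockchain implementation, and the log entries are invented.

```python
import hashlib
import json

def make_block(prev_hash, log_entry):
    """Create one block: the entry plus the hash of the previous block."""
    body = {"prev": prev_hash, "entry": log_entry}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {"prev": prev_hash, "entry": log_entry, "hash": digest}

def verify(chain):
    """Re-hash every block and check the links; False if anything was altered."""
    prev = "0" * 64
    for block in chain:
        body = {"prev": block["prev"], "entry": block["entry"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != digest:
            return False
        prev = block["hash"]
    return True

chain = []
prev = "0" * 64
for entry in ["alert: port scan", "alert: brute force", "normal traffic"]:
    block = make_block(prev, entry)
    chain.append(block)
    prev = block["hash"]

print(verify(chain))          # True: intact chain
chain[1]["entry"] = "erased"  # an attacker edits a stored log
print(verify(chain))          # False: tampering is detected
```

A real deployment adds consensus among the cooperating nodes, so no single node can silently rewrite the chain.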

    • Open Access Article

      5 - A Horizon for Sentiment Analysis in Social Networks based on Interpreting Contents
      Maryam Tayefeh Mahmoudi Amirmansour Yadegari Parvin Ahmadi Kambiz Badie
      Interpreting content in social networks with the aim of analyzing the sentiment of its narrators is of particular significance. In this paper, we present a framework for this purpose that classifies the messages hidden in content using rule-type protocols with a high level of abstraction. In this framework, items such as the prosody of a content's narrator, the context in which the content is disseminated, and the key propositions in the content's text form the condition part of a protocol, while the possible classes for the content's message form its action part. Notably, the proposed rule-type protocols can equally be used for other languages, owing to the generic nature of the items above. Results of computer simulations on a variety of content from social networks show that the proposed framework is well capable of analyzing the sentiment of content narrators in these networks.

    • Open Access Article

      6 - Multi-level ternary quantization for improving sparsity and computation in embedded deep neural networks
      Hosna Manavi Mofrad Seyed Ali ansarmohammadi Mostafa Salehi
      Deep neural networks (DNNs) have attracted great interest due to their success in various applications. However, their computational complexity and memory footprint are the main obstacles to deploying such models on embedded devices with limited memory and computational resources. Network-compression techniques, most importantly quantization and pruning, can overcome these challenges. One well-known quantization method for DNNs is multi-level binary quantization, which both exploits simple bit-wise logical operations and narrows the accuracy gap between binary neural networks and full-precision DNNs. However, since multi-level binary quantization cannot represent the value zero, it cannot take advantage of sparsity. On the other hand, DNNs are known to be sparse, and pruning their parameters reduces the amount of data stored in memory while also speeding up computation. In this paper, we propose a pruning- and quantization-aware training method for multi-level ternary quantization that benefits from both multi-level quantization and data sparsity. In addition to improving accuracy over multi-level binary networks, it gives the network the ability to be sparse. To save memory and reduce computational complexity, we increase sparsity in the quantized network by pruning until the accuracy loss becomes negligible. The results show that the potential computation speedup of our model at bit-level and word-level sparsity can reach 15x and 45x, respectively, compared to basic multi-level binary networks.
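The key point, that ternary levels can represent zero and therefore sparsity, can be shown with a threshold-based ternarizer. This is a generic illustration of ternary quantization, not the authors' training-aware method, and the weights are invented.

```python
def ternarize(weights, threshold):
    """Quantize weights to {-1, 0, +1}: values inside the threshold become 0,
    which is exactly the sparsity that binary levels {-1, +1} cannot express."""
    return [0 if abs(w) < threshold else (1 if w > 0 else -1) for w in weights]

def sparsity(q):
    """Fraction of zero weights, i.e. the multiplications that can be skipped."""
    return q.count(0) / len(q)

# Hypothetical layer weights, not from a real trained network
w = [0.41, -0.03, 0.72, -0.55, 0.04, -0.09, 0.66, 0.02]
q = ternarize(w, threshold=0.1)
print(q, sparsity(q))  # half the weights quantize to 0 at this threshold
```

Every zero weight removes a multiply-accumulate, which is where the bit-level and word-level speedups reported above come from.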

    • Open Access Article

      7 - Anomaly and Intrusion Detection through Data Mining and Feature Selection Using the PSO Algorithm
      Fereidoon Rezaei Mohamad Ali Afshar Kazemi Mohammad Ali Keramati
      Today, with the development of technology, the increased use of the Internet in business, and the migration of businesses from physical to virtual and online forms, attacks and anomalies have likewise shifted from physical to virtual: instead of robbing a store or market, individuals intrude on websites and virtual markets through cyberattacks and disrupt them. Detecting attacks and anomalies is one of the new challenges in advancing e-commerce technologies, and anomalies in a network and destructive activities in e-commerce can be detected by analyzing the behavior of network traffic. Data-mining techniques are used extensively in intrusion detection systems (IDS) to detect anomalies. Reducing the dimensionality of the features plays an important role in intrusion detection, since detecting anomalies in high-dimensional network traffic is time-consuming; choosing suitable and accurate features speeds up the analysis and thus improves detection speed. In this article, using data-mining algorithms such as J48 and PSO, we were able to significantly improve the accuracy of detecting anomalies and attacks.
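To illustrate how PSO can drive feature selection, here is a minimal binary-PSO sketch with a sigmoid transfer function. The fitness function is a toy stand-in (it rewards a fixed set of "informative" features and penalizes subset size); in the paper's setting, fitness would instead be the accuracy of a classifier such as J48 on the selected features. Everything here is illustrative.

```python
import math
import random

random.seed(0)

# Toy ground truth: features 0, 3, and 5 are "informative"; selecting extra
# features is penalized. A stand-in for classifier accuracy on traffic data.
INFORMATIVE = {0, 3, 5}
N_FEATURES = 8

def fitness(bits):
    chosen = {i for i, b in enumerate(bits) if b}
    return 2.0 * len(chosen & INFORMATIVE) - 0.5 * len(chosen)

def binary_pso(n_particles=10, iters=30, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(n_particles)]
    vel = [[0.0] * N_FEATURES for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = max(pbest, key=fitness)[:]
    history = [fitness(gbest)]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(N_FEATURES):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Sigmoid transfer: velocity -> probability of selecting bit d
                pos[i][d] = 1 if random.random() < 1 / (1 + math.exp(-vel[i][d])) else 0
            if fitness(pos[i]) > fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) > fitness(gbest):
                    gbest = pbest[i][:]
        history.append(fitness(gbest))
    return gbest, history

best, history = binary_pso()
print(best, history[-1])  # best feature mask and its fitness
```

The global-best fitness can only improve over iterations; swapping the toy fitness for cross-validated classifier accuracy turns this sketch into the wrapper-style feature selection the abstract describes.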

    • Open Access Article

      8 - Identifying the Key Drivers of Digital Signature Implementation in Iran (using fuzzy Delphi method)
      Ghorbanali Mehrabani Fatemeh Zargaran khouzani
      The purpose of this article is to identify and analyze the key drivers of digital signature implementation in Iran using a fuzzy Delphi approach. The research is applied in purpose and uses a hybrid approach to information gathering. The statistical community consists of experts and specialists in the field of information technology and digital signatures, together with the articles in this field. A sample of 13 experts was selected by purposive sampling, and 30 articles were selected based on availability and downloadability, non-technical nature, and relevance to the topic. Data were analyzed following the fuzzy Delphi approach; validity and reliability were calculated and confirmed using the CVR index and Cohen's kappa test, with coefficients of 0.83 and 0.93, respectively. The results show that the key drivers of digital signature implementation in Iran comprise 5 main dimensions and 30 concepts: 1) security (information confidentiality, information security, sender authentication, document authentication, privacy protection, trust between parties); 2) business (digital business models, communication needs, staff management, organization size, organizational structure, organizational resources, organizational culture, top managers, competition ecosystem, e-governance); 3) user (perceived convenience, perceived benefit, consumer behavior, consumer literacy, consumer lifestyle); 4) technical (development of technical infrastructure, systems integration, system complexity, system tanks, design quality, technical speed of certificate production and verification, resistance to hackers); and 5) legal (legal licenses, penal laws, the legislative body, e-commerce laws).
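For context, the CVR index used above is Lawshe's content validity ratio. The sketch below computes it; the panel counts are invented for illustration and are not the article's item-level data.

```python
def content_validity_ratio(n_essential, n_panelists):
    """Lawshe's CVR = (n_e - N/2) / (N/2): +1 when every panelist rates the
    item 'essential', 0 when exactly half do, -1 when none do."""
    half = n_panelists / 2
    return (n_essential - half) / half

# Illustrative: 12 of a 13-member panel rate an item essential
print(round(content_validity_ratio(12, 13), 3))
```

Items whose CVR falls below the critical value for the panel size are dropped, which is how the Delphi rounds whittle the candidate drivers down to the 30 retained concepts.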

    • Open Access Article

      9 - Generalizing the Concept of Business Process Structural Soundness from Classic Petri-nets to BPMN2.0 Process Models
      Yahya Poursoltani Mohammad Hassan Shirali-Shahreza S. Alireza Hashemi Golpayegani
      The BPMN2.0 standard is a modeling language that can be understood and used by a wide range of users. However, because of its non-formal nature, models designed with it can contain structural errors such as deadlock (the impossibility of executing some process tasks) and livelock (the infinite repetition of tasks). These semantic errors can create anomalies in the workflow of the organization. So far, some research has been conducted on the validation of these process models, and various solutions have been provided to discover some of these structural errors. The question about these methods is whether any of them can definitively guarantee the structural correctness of a BPMN process model. To answer this question, we need a comprehensive definition of a correct BPMN2.0 process model against which the comprehensiveness of validation methods can be evaluated, so that we can be confident a given method discovers all structural errors of the process model. In this paper, based on the concept of general process models and the concept of soundness (defined for process models created using Petri nets), and by generalizing its properties, namely liveness and boundedness, to BPMN2.0 process models, a comprehensive definition of a correct (sound) BPMN2.0 process model is provided. The comprehensiveness of the methods suggested in some of the most important prior research is then evaluated against this definition, which can serve as a measure of the efficiency of BPMN validation methods.
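The Petri-net machinery behind classic soundness can be illustrated with a tiny reachability check: explore all markings a net can reach and flag any marking where nothing can fire but the process has not properly completed. The net below is a toy example of the well-known unsound pattern of an XOR-split feeding an AND-join; it is an illustration of the idea, not the paper's formalization.

```python
from collections import deque

def enabled(marking, transitions):
    """Transitions whose every input place holds a token."""
    return [t for t, (pre, post) in transitions.items()
            if all(marking.get(p, 0) >= 1 for p in pre)]

def fire(marking, pre, post):
    m = dict(marking)
    for p in pre:
        m[p] -= 1
    for p in post:
        m[p] = m.get(p, 0) + 1
    return m

def find_deadlocks(initial, transitions, final_place):
    """BFS over the reachability graph; a deadlock is a dead marking (no
    enabled transition) that is not the proper final marking."""
    seen, queue, deadlocks = set(), deque([initial]), []
    while queue:
        m = queue.popleft()
        key = tuple(sorted(m.items()))
        if key in seen:
            continue
        seen.add(key)
        ts = enabled(m, transitions)
        if not ts and m.get(final_place, 0) != 1:
            deadlocks.append(m)
        for t in ts:
            queue.append(fire(m, *transitions[t]))
    return deadlocks

# Toy unsound net: an XOR-split (t1 or t2 fires, marking only one branch)
# feeds an AND-join (t3 needs both p2 and p3), so t3 can never fire.
transitions = {
    "t1": (["p1"], ["p2"]),
    "t2": (["p1"], ["p3"]),
    "t3": (["p2", "p3"], ["end"]),
}
print(find_deadlocks({"p1": 1}, transitions, "end"))
```

The same mismatch of split and join gateways is exactly the kind of structural error BPMN validation methods must catch, which is why the paper lifts Petri-net soundness to BPMN2.0.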

    • Open Access Article

      10 - Enterprise Ontology Based on Intelligent Agents: A Case Study of Knowledge-Based Production Export Actors
      Mohammad Rahim Banakar Shaban Elahi Shaghayegh Sahraee
      Purpose: To develop an ontology and to investigate the role of intelligent agents in enriching it. A study of trade documents in the Trade Development Organization and of Iran's export and import statistics over recent years shows a practical need to address Iran's trade weaknesses through an ontology of knowledge-based export actors. Methodology: Grounded theory (systematic literature review). Conclusion: An ontology is a formal representation of the entities of a particular domain, which makes it possible to represent the tacit and explicit knowledge of that domain as a knowledge base. Intelligent-agent capabilities can be used to automate the development and enrichment of an ontology. A systematic review is needed to evaluate, summarize, and compile studies on ontologies and intelligent agents and to establish an interaction between the two. A database search covering January 2017 to the end of 2021 identified 38 relevant articles, of which 15 were eligible for qualitative evaluation. From these, nine concepts of ontology enrichment by intelligent agents were identified.

    • Open Access Article

      11 - Fault detection and identification in photovoltaic systems using VGG16 deep neural network
      Samaneh Azimi Mohammad Manthouri Mehdi Akhbari
      Fault detection in photovoltaic (PV) arrays is necessary to increase the output power and the useful life of a PV system. Conditions such as partial shading, high-impedance faults, and the maximum power point tracking (MPPT) system make PV fault detection under environmental conditions more challenging. The literature has identified and classified defects in only a few scenarios. In this study, two-dimensional scalograms are generated from PV system data. VGG16, a pretrained convolutional neural network, is used for feature extraction. Finally, a fully connected neural network is trained to identify and classify faults in the PV system. Unlike previous methods in the defect detection and classification literature, various defective cases in combination with MPPT are considered in this research. The proposed method, including the pretrained CNN, is shown to outperform existing methods, achieving a fault detection accuracy of 83.375%.
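The scalogram step above turns a 1D electrical trace into a 2D time-frequency image that a CNN can consume. As a rough illustration only (a crude Ricker-wavelet transform on a synthetic trace, not the paper's exact transform or data):

```python
import math

def ricker(scale, length):
    # Ricker ("Mexican hat") wavelet sampled at `length` points.
    w = []
    for i in range(length):
        t = (i - length // 2) / scale
        w.append((1 - t * t) * math.exp(-t * t / 2))
    return w

def scalogram(signal, scales, length=33):
    """Rows = scales, columns = time: a crude 2D time-frequency image."""
    img = []
    for s in scales:
        w = ricker(s, length)
        row = []
        for i in range(len(signal)):
            acc = 0.0
            for j, wj in enumerate(w):
                k = i + j - length // 2
                if 0 <= k < len(signal):
                    acc += signal[k] * wj
            row.append(abs(acc))
        img.append(row)
    return img

# A toy "PV current" trace: a sine with a step fault in the middle.
sig = [math.sin(0.3 * t) + (1.0 if 40 <= t < 60 else 0.0) for t in range(100)]
img = scalogram(sig, scales=[2, 4, 8, 16])
print(len(img), len(img[0]))  # 4 100
```

The resulting 2D array would then be rendered as an image and passed to the pretrained feature extractor.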

    • Open Access Article

      12 - Presenting an Implementation Model for the ICT Policies of the 6th Development Plan Using the Neural Network Method
      Nazila Mohammadi Gholamreza Memarzadeh Tehran Sedigheh Tootian Isfahani
      Properly managing the implementation of information and communication technology (ICT) policies in a planned way is essential to improving the country's position in the fields of science and technology. The purpose of this research is to provide a model of the factors affecting the implementation of Iran's ICT policies using the neural network technique, based on Giddens' structuration theory. In terms of how it was conducted, this research is a survey; in terms of purpose, it is applied, since its results are intended for use in the Ministry of Communication and Information Technology and the Telecommunication Company of Iran. Data were collected through library and field methods, using a researcher-made questionnaire. The statistical population consists of ICT experts at the headquarters of the Telecommunication Company of Iran (810 people), of whom 260 were randomly selected as a sample based on Cochran's formula. MATLAB was used for data analysis. According to the findings, the best combination for development occurs when all input variables are considered simultaneously, and the worst case occurs when the infrastructure development variable is ignored; based on the network sensitivity analysis, infrastructure development is the most important factor and content supply the least important.
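The sample-size step can be reproduced. Under the usual assumptions for Cochran's formula (95% confidence, p = 0.5, 5% margin of error; the abstract does not state these explicitly), the finite-population correction for N = 810 yields approximately the sample size reported:

```python
def cochran(N, z=1.96, p=0.5, e=0.05):
    """Cochran's sample size with the finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)   # infinite-population size (384.16)
    return n0 / (1 + (n0 - 1) / N)           # corrected for population N

n = cochran(810)
print(round(n))   # 261, close to the 260 reported in the abstract
```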

    • Open Access Article

      13 - Community Detection in Bipartite Networks Using HellRank Centrality Measure
      Ali Khosrozadeh Ali Movaghar Mohammad Mehdi Gilanian Sadeghi Hamidreza Mahyar
      Community structure is a common and important feature of many complex networks, including bipartite networks. In recent years, community detection has received attention in many fields and many methods have been proposed for it, but the heavy time consumption of some methods limits their use in large-scale networks. Methods with lower time complexity exist, but they are mostly non-deterministic, which greatly reduces their applicability in the real world. The usual approach to community detection in bipartite networks is to first construct a unipartite projection of the network and then detect communities in that projection using methods designed for unipartite networks, but such projections inherently lose information. In this paper, based on the bipartite modularity measure, which quantifies the strength of partitions in bipartite networks, and using the HellRank centrality measure, a fast and deterministic method is proposed for detecting communities directly from bipartite networks, without the need for projection. The proposed method is inspired by, and simulates, the voting process in elections.
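The voting idea can be sketched on a toy bipartite graph. This is a sketch only: the seed nodes stand in for top-HellRank vertices (assumed, not computed), and the update rule is a plain majority vote with deterministic tie-breaking, not the paper's exact procedure.

```python
# Toy bipartite graph: u-nodes on one side, v-nodes on the other.
edges = [("u1", "v1"), ("u1", "v2"), ("u2", "v1"), ("u2", "v2"),
         ("u3", "v3"), ("u3", "v4"), ("u4", "v3"), ("u4", "v4")]

adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def vote_communities(adj, seeds, max_rounds=10):
    """Deterministic majority-vote propagation from seed nodes."""
    labels = dict(seeds)
    for _ in range(max_rounds):
        changed = False
        for node in sorted(adj):          # fixed order keeps it deterministic
            if node in seeds:
                continue
            tally = {}
            for nb in adj[node]:
                if nb in labels:
                    tally[labels[nb]] = tally.get(labels[nb], 0) + 1
            if not tally:
                continue
            # most votes wins; ties broken by the smaller community id
            best = sorted(tally.items(), key=lambda kv: (-kv[1], kv[0]))[0][0]
            if labels.get(node) != best:
                labels[node] = best
                changed = True
        if not changed:
            break
    return labels

labels = vote_communities(adj, seeds={"u1": 0, "u3": 1})
print(labels)
```

Because the iteration order and tie-breaking are fixed, repeated runs always produce the same partition, which is the determinism the abstract emphasizes.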

    • Open Access Article

      14 - Test Case Selection Based on Test-Driven Development
      Zohreh Mafi Mirian Mirian
      Test-Driven Development (TDD) is a test-first software development method in which production of each component of the code begins with writing its test case. The method has attracted attention for its many advantages, including readable, well-organized, and concise code; increased quality, productivity, and reliability; and the possibility of regression testing thanks to the comprehensive suite of unit tests it creates. The large number of unit test cases produced is a strength in terms of code reliability, yet repeatedly executing all of them lengthens regression testing. The purpose of this article is to present a test case selection algorithm that reduces regression testing time in TDD. Various ideas have been proposed for selecting test cases and reducing regression test time, most of them tied to a particular programming language or development method. The idea presented here is based on the program-differencing approach and on the nature of TDD: meaningful semantic and structural links are established between unit tests and code blocks, and test cases are selected based on these relationships.
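The core of such a selection scheme can be sketched in a few lines. All names below (`parser.header`, `test_render`, etc.) are hypothetical: each test is linked to the code blocks it was written against, and after a change only the tests whose linked blocks appear in the diff are re-run.

```python
# Hypothetical test-to-block links recorded during TDD.
links = {
    "test_parse_header": {"parser.header"},
    "test_parse_body":   {"parser.body", "parser.header"},
    "test_render":       {"renderer.html"},
}

def select_tests(links, changed_blocks):
    """Pick the tests whose linked code blocks intersect the diff."""
    return sorted(t for t, blocks in links.items()
                  if blocks & changed_blocks)

print(select_tests(links, {"parser.header"}))
# ['test_parse_body', 'test_parse_header']
```

A change touching only `renderer.html` would re-run just `test_render`, skipping the parser tests entirely.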

    • Open Access Article

      15 - An Overview of Replica Consistency Methods in Distributed Systems and Future Work
      Mahsa Beigrezaei
      Nowadays, applications generate huge amounts of data, in the range of several terabytes or petabytes, shared among many users around the world. Distributed systems such as grids and clouds provide a suitable platform for such data-intensive applications. These systems use data replication to address performance problems, guarantee quality of service, and increase data accessibility. Despite its many advantages, replication also brings management costs. The balance between the consistency cost of replication and its benefits is a hotly debated topic among researchers in this field, so attention to replica consistency plays an important role in the efficiency of these systems. Many replica consistency strategies have been proposed. Each tries to reduce consistency costs and provide effective solutions by considering parameters such as read rate, write rate, stale-data tolerance, number of replicas, and communication bandwidth when determining replica consistency levels. This article examines the concepts of replication and replica consistency, categorizes their types, and reviews previous work in the field. Existing work is compared in terms of system type, decision parameters, consistency model, and improved parameters. Finally, open issues in the field are raised.
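The decision parameters listed above can feed a simple rule-based chooser. The thresholds below are purely illustrative, not taken from any surveyed strategy; they only show how read rate, write rate, and staleness tolerance interact:

```python
def pick_consistency(read_rate, write_rate, staleness_tolerance_s):
    """Toy rule of thumb for choosing a replica consistency level.
    Thresholds are illustrative assumptions, not from the literature."""
    if write_rate == 0:
        return "eventual"          # read-only data: replicas never diverge
    if staleness_tolerance_s == 0:
        return "strong"            # clients cannot tolerate stale reads
    ratio = read_rate / write_rate
    if ratio > 10 and staleness_tolerance_s >= 60:
        return "eventual"          # mostly-read data with lax freshness
    return "bounded-staleness"     # propagate updates within the tolerance

print(pick_consistency(read_rate=500, write_rate=5, staleness_tolerance_s=120))
# eventual
```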

    • Open Access Article

      16 - Application identification through intelligent traffic classification
      Shaghayegh Naderi
      Traffic classification and analysis is one of the big challenges in data mining and machine learning, playing an important role in providing security, quality assurance, and network management. Today, a large share of network traffic is encrypted by secure communication protocols such as HTTPS. Encrypted traffic reduces the possibility of monitoring and detecting suspicious and malicious traffic in communication infrastructures (in exchange for increased security and user privacy), and classifying it without decrypting network communications is difficult, because payload information is lost and only header information (itself encrypted in newer protocol versions such as TLS 1.3) is accessible. Old traffic analysis approaches, such as port-based and payload-based methods, have therefore lost their effectiveness, and new approaches based on artificial intelligence and machine learning are used for encrypted traffic analysis. In this article, after reviewing traffic analysis methods, an operational architectural framework for intelligent traffic analysis and classification is designed. An intelligent model for traffic classification and application identification is then presented and evaluated using machine learning methods on the Kaggle141 dataset. The results show that the random forest model, in addition to being more interpretable than deep learning methods, achieves higher traffic classification accuracy than other machine learning methods. Finally, tips and suggestions on applying machine learning to operational traffic classification are provided.
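The key point above is that encrypted flows must be classified from side-channel statistics rather than payload. As a minimal sketch (made-up flow data, and a trivial nearest-centroid model standing in for the random forest):

```python
# Each flow is described only by observable statistics:
# (mean packet size in bytes, mean inter-arrival time in seconds).
TRAIN = {
    "video": [(1200, 0.01), (1300, 0.02), (1250, 0.015)],
    "chat":  [(80, 1.2), (120, 0.9), (100, 1.5)],
}

def centroid(points):
    return tuple(sum(c) / len(points) for c in zip(*points))

CENTROIDS = {app: centroid(p) for app, p in TRAIN.items()}

def classify(flow):
    """Assign a flow to the application with the nearest centroid."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(flow, c))
    return min(CENTROIDS, key=lambda app: dist(CENTROIDS[app]))

print(classify((1100, 0.05)))   # video
```

A real pipeline would extract many more flow features and train a proper classifier, but the input never includes decrypted payload bytes.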

    • Open Access Article

      17 - A Survey on the Applications of Graph Theory in Information Retrieval
      Maryam Piroozmand Amir Hosein Keyhanipour Ali Moeini
      Due to its power in modeling complex relations between entities, graph theory has been widely used for real-world problems. Information retrieval, meanwhile, has emerged as one of the major problems in the area of algorithms and computation. As graph-based information retrieval algorithms have proven efficient and effective, this paper provides an analytical review of these algorithms and proposes a categorization of them. Briefly, graph-based information retrieval algorithms can be divided into three major classes: the first includes algorithms that use a graph representation of the corresponding dataset within the retrieval process; the second contains semantic retrieval algorithms that utilize graph theory; the third covers applications of graph theory to the learning-to-rank problem. The reviewed works are analyzed by both frequency and publication time. An interesting finding of this review is that the third category is a relatively hot research topic in which only a limited number of recent works have been conducted.

    • Open Access Article

      18 - The relevance, importance and dependence of critical infrastructures of the Islamic Republic of Iran from a cyber perspective
      Hossein Gharaee Garakani ابوذر صولت رفیعی Fatemeh saghafi Mohammad Malekinia
      In recent years, cyber attacks on countries' critical infrastructures have increased significantly. The types of critical infrastructure and their interdependencies differ from one country to another based on national requirements. Disruption of the mission or services of one critical infrastructure has a cascading effect on other infrastructures and causes them serious problems. Studies have taken different approaches to modeling these dependencies; the important point is that such models cannot be generalized to other countries, owing to each country's national requirements. In this research, by forming 11 focus groups consisting of senior and middle managers from each infrastructure area, a network analysis based on the DEMATEL technique was used to identify the critical infrastructures that, from a cyber perspective, most influence and are most influenced by the others; the relationships between critical infrastructures and their prioritization from a cyber perspective were also determined. The results can be useful in designing a national alert-sharing system for computing national situational awareness in the cyber domain, and in further research based on critical infrastructure dependencies.
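The DEMATEL computation behind such an analysis is compact: a direct-influence matrix is normalized, the total-influence matrix T = X(I − X)⁻¹ is formed, and row/column sums give each factor's prominence (D + R) and net cause/effect role (D − R). The sketch below uses made-up scores for three infrastructures (illustrative numbers only, not the study's focus-group data) and approximates T with a truncated power series X + X² + X³ + …:

```python
# Made-up direct-influence scores among three infrastructures.
A = [[0, 3, 2],
     [1, 0, 3],
     [1, 1, 0]]
n = len(A)
s = max(sum(row) for row in A)                 # normalize by the max row sum
X = [[a / s for a in row] for row in A]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Total influence T = X + X^2 + X^3 + ... (truncated power series).
T = [[0.0] * n for _ in range(n)]
P = [row[:] for row in X]
for _ in range(200):
    for i in range(n):
        for j in range(n):
            T[i][j] += P[i][j]
    P = matmul(P, X)

D = [sum(T[i]) for i in range(n)]                          # influence exerted
R = [sum(T[i][j] for i in range(n)) for j in range(n)]     # influence received
for i in range(n):
    print(f"infrastructure {i}: D+R={D[i]+R[i]:.2f}  D-R={D[i]-R[i]:+.2f}")
```

A positive D − R marks a net "cause" infrastructure (a good candidate for prioritized protection); a negative value marks a net "effect".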

    • Open Access Article

      19 - Belief analysis of Persian texts based on deep learning with emotion-word separation
      Hossein Alikarami AmirMasoud Bidgoli Hamid Haj Seyyed Javadi
      Belief (sentiment) analysis, the classification of texts based on the feelings and opinions of users on websites and social media, helps people, companies, and organizations make important decisions. Belief mining is a system for analyzing people's opinions and feelings about an entity such as a product, person, or organization, based on users' comments, messages, and tweets in social media. In this article, Persian texts from four datasets of social media and website messages, comments, and tweets are classified into two poles, positive and negative, on a scale from -2 to +2, using two deep learning methods, CNN and LSTM, that take word sense into account. In the proposed method, the data are first preprocessed through character-to-number conversion, stop-word removal, and multiword analysis; then the CNN and LSTM algorithms, combined with word sense disambiguation (WSD), are used to classify the texts and recognize the intensity of the emotions carried by the words. We call the proposed models CNN_WSD and LSTM_WSD. The method is evaluated on the Persian Twitter dataset and compared with other machine learning and deep learning methods (DNN, CNN, LSTM); Python is used for the implementation. The accuracy of the proposed LSTM_WSD and CNN_WSD models is 95.8% and 94.3%, respectively.
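The preprocessing steps named above (character-to-number conversion and stop-word removal) can be sketched as follows. The stop-word list here is a tiny illustrative sample, not the one used in the paper:

```python
# Tiny illustrative Persian stop-word list ("and", "in", "from", "to", "that").
STOP_WORDS = {"و", "در", "از", "به", "که"}

def preprocess(text, max_len=16):
    """Drop stop words, then map each token's characters to integer codes,
    padded/truncated to a fixed length for the network's input layer."""
    tokens = [t for t in text.split() if t not in STOP_WORDS]
    encoded = []
    for tok in tokens:
        codes = [ord(c) for c in tok][:max_len]   # character-to-number step
        codes += [0] * (max_len - len(codes))     # zero-padding
        encoded.append(codes)
    return tokens, encoded

# "This movie is very good and watchable"
tokens, encoded = preprocess("این فیلم خیلی خوب و دیدنی است")
print(tokens)   # the conjunction "و" has been removed
```

The fixed-length integer matrices produced this way are what a CNN or LSTM embedding layer would consume.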

    • Open Access Article

      20 - Energy-Efficient Fixed-Point Hardware Accelerator for Embedded DNNs
      Mostafa Salehi Marzie Mastalizade Ali Ansarmohammadi Najme Nazari
      Deep neural networks (DNNs) have demonstrated remarkable performance in various application domains, such as computer vision, pattern recognition, and natural language processing. However, deploying these models on edge-computing devices is challenging due to their extensive memory requirements and computational complexity, which make DNNs difficult to deploy on low-power, resource-limited devices. One promising technique to address this challenge is quantization, particularly fixed-point quantization. Previous studies have shown that reducing the bit-width of weights and activations to, say, 3 or 4 bits through fixed-point quantization can preserve the classification accuracy of full-precision neural networks. Despite extensive research on the compression efficiency of fixed-point quantization techniques, their energy efficiency, a critical metric for embedded systems, has not been thoroughly explored. This research therefore assesses the energy efficiency of fixed-point quantization techniques while maintaining accuracy. To accomplish this, we model each quantization method and design an architecture for it, then compare their area and energy efficiency at the same accuracy level. Our experimental results indicate that incorporating scaling factors and offsets into LSQ, a well-known quantization method, improves DNN accuracy by 0.1%, but at the cost of a 3× decrease in hardware energy efficiency. This research highlights the significance of evaluating fixed-point quantization techniques not only for compression efficiency but also for energy efficiency on edge-computing devices.
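The fixed-point quantization discussed above can be illustrated with a minimal symmetric quantizer with a scale and offset, in the spirit of LSQ-like schemes (the weights, bit-width, and scale choice below are illustrative, not the paper's configuration):

```python
def quantize(x, bits, scale, offset=0.0):
    """Quantize x to a signed `bits`-bit fixed-point code; return the
    integer code and the dequantized (reconstructed) value."""
    qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    q = round((x - offset) / scale)
    q = max(qmin, min(qmax, q))        # clamp to the k-bit integer range
    return q, q * scale + offset

weights = [0.31, -0.07, 0.52, -0.45]
scale = max(abs(w) for w in weights) / (2 ** 3 - 1)   # 4-bit: qmax = 7
for w in weights:
    q, w_hat = quantize(w, bits=4, scale=scale)
    print(f"{w:+.2f} -> code {q:+d} -> {w_hat:+.3f}")
```

In hardware, only the small integer codes are stored and multiplied; the scale (and offset) are applied once per layer, which is exactly where the area/energy trade-off measured in the paper arises.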

    • Open Access Article

      21 - Explaining the Role of Standardization in Proposed Solutions for Privacy Protection in Health Data
      Batool Mehrshad Mohammad Mehraeen Mohammad Khansari Saeed Mortazavi
      Introduction: Given the importance of data sharing in the digital era and its two main considerations, standardization and privacy protection, this article aims to answer a critical question: does standardization play a role in the proposed solutions for health data privacy protection? Methods: The present study is a systematic review conducted by searching databases such as Web of Science, PubMed, ScienceDirect, Springer, Magiran, and SID, with a time-limit filter applied. After applying the inclusion and exclusion criteria and evaluating the results, relevant studies were selected. Findings: Articles addressing standardization and privacy protection in health data were analyzed against five indicators. The need for standardization and its role in preserving health data privacy are explained by examining the findings and discussing various privacy laws in the health field and their relationship with standardization. The study reveals that, because the technical structure of fourth- and fifth-generation health care has facilitated standardization, privacy protection can also be achieved through standardization. Directions for future research on this topic are also suggested. Conclusion: Fourth- and fifth-generation health care systems are technology-oriented and built on standards, and those standards make their evaluation possible. Thus, if health data privacy laws are developed based on standards, they will carry a strong guarantee of enforcement. This further highlights the critical role of standards development organizations in this field.
  • Affiliated to
    Iranian Information and Communication Technology Association
    Director-in-Charge
    Masoud Shafiee (Amirkabir University of Technology)
    Editor-in-Chief
    Mohammad-Shahram Moin (Research Institute of Communication and Information Technology)
    Executive Manager
    Mohammad-Shahram Moin (Research Institute of Communication and Information Technology)
    Editorial Board
    H. Nezamabadi-pour Ramezan Ali Sadeghzadeh (K. N. Toosi University of Technology) Hassan Rashidi (Allameh Tabataba'i University) Alireza Behrad (Shahed University) Mir Mohsen Pedram Fatemeh Saghafi (University of Tehran) Mirhadi Seyed Arabi Mohamad Hesam Tadayon Mohsen Ebrahimi-Moghaddam Sadaf Salehi Masoud Shafiee (Amirkabir University of Technology) Mohamadreza Aref (Sharif University of Technology) Ahmad Motamedi (Amirkabir University of Technology) Ahmad-Reza Sherafat Reza FarajiDana Mohammad Teshnelab (K. N. Toosi University of Technology) Mohammad-Shahram Moin (Research Institute of Communication and Information Technology) Mahmoud Kamarei (University of Tehran)
    Print ISSN: 2717-0411
    Online ISSN: 2783-2783

    Publication period: Bi-quarterly
    Email
    jouraict@gmail.com
    Address
    Room 612, 6th Floor, Aboureyhan Bldg., AmirKabir University of Technology, Hafez Avenue, Tehran, Iran
    Phone
    021-66495433
    Fax
    66495433

    Last Update 3/5/2024