List of Articles (by Subject): ICT


    • Open Access Article

      1 - An Information Architecture Framework for Utilizing Social Networks in Iranian Higher Education System
      Mehrab Ali Golshani Rosta
      Management of social networks has become a strategic challenge for many applications, including education, due to its growing importance. Enterprise Architecture (EA) uses a holistic specification of information technology functions in organizations to reduce the complexity of using information technology and to increase its efficiency. Because the use of social networks in education has not yet passed its preliminary stages in most countries, no standard framework or model exists. The aim of this paper is to design an appropriate architecture framework for utilizing social networks in the Iranian higher education system. To do this, the concept of social networks and their applications in educational environments is first investigated. The concepts of enterprise architecture and information architecture frameworks are then studied, the Zachman framework is selected as the main tool, and a questionnaire is used to identify, from expert opinion, the vital aspects of implementing social networks in higher education. The findings indicate the main reasons for using social networks in higher education (strategy), the most important actors in this field (people), the infrastructure needed (infrastructure), the data and information required (data), and the processes required to realize a social learning network (process). The main characteristic of the final framework is that it is comprehensive while attending to local considerations.
    • Open Access Article

      2 - The Impact of IQ Imbalance on the BER of Adaptive Modulation in the MIMO System
      Hooman Tahayori, Abbas Mohammadi, Abdolali Abdipour
      The effect of IQ imbalance on adaptive MQAM modulation in MIMO systems with direct-conversion receivers is investigated in this paper. A continuous-power, discrete-rate adaptive modulation scheme in a MIMO system with perfect channel information at both transmitter and receiver is considered. The selected adaptation mechanism changes the transmit power and constellation size under average transmit power and instantaneous bit error rate (BER) constraints to maximize the average spectral efficiency. First, a closed-form expression for the BER of MQAM modulation under IQ imbalance is introduced. Then, the impact of IQ imbalance on the BER of adaptive modulation in the MIMO system is studied. This effect is also investigated for different transmit-receive antenna configurations. The analytical results are compared with Monte Carlo simulation results to verify the analytical expressions.
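The paper's closed-form BER expression under IQ imbalance is not reproduced in this listing; as a rough illustration of the adaptation rule the abstract describes, the sketch below uses the well-known approximate BER bound for square M-QAM, 0.2*exp(-1.5*snr/(M-1)), without any IQ-imbalance term. The constellation set and target BER are illustrative assumptions, not the paper's settings.

```python
import math

def mqam_ber_approx(snr_db: float, M: int) -> float:
    """Approximate BER of square M-QAM (a standard bound, valid for
    M >= 4 and BER below about 1e-2)."""
    snr = 10 ** (snr_db / 10)            # linear SNR
    return 0.2 * math.exp(-1.5 * snr / (M - 1))

def max_constellation(snr_db: float, target_ber: float,
                      sizes=(4, 16, 64, 256)) -> int:
    """Pick the largest constellation whose approximate BER meets the
    target, mimicking a discrete-rate adaptive modulation policy.
    Returns 0 when no constellation is feasible (no transmission)."""
    feasible = [M for M in sizes if mqam_ber_approx(snr_db, M) <= target_ber]
    return max(feasible) if feasible else 0
```

For example, at 20 dB SNR with a 1e-3 BER target this policy selects 16-QAM, since 64-QAM violates the target while 16-QAM satisfies it.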
    • Open Access Article

      3 - Design and implementation of a fuzzy expert system for suspicious behavior detection in e-banking system
      Leila Saroukhani, Gholamali Montazer
      One of the most important obstacles to using Internet banking is the lack of security and the abuses such systems suffer. Fraud and suspicious-behavior detection, to prevent unauthorized access, are therefore of great importance to financial organizations and banks. In this paper, an expert system is designed to detect users' unusual or suspicious behavior in an Internet banking system. Since users' behavior may be ambiguous and uncertain, the system is built on fuzzy notions so that it can recognize behaviors and classify suspicious ones according to their intensity. The proposed system has been implemented and tested on the Internet banking system of Bank Mellat, which has the largest online banking service in Iran, and the results show that the system successfully detects unusual behavior.
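The paper's actual rule base and membership functions are not given in the abstract; the following is a minimal sketch of fuzzy suspicion scoring with triangular memberships over two hypothetical profile-deviation features (both invented here for illustration), combined Mamdani-style with min for AND and max over rules.

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def suspicion(amount_dev: float, hour_dev: float) -> float:
    """Toy fuzzy rules on two hypothetical features, each scaled to [0, 1]:
    deviation of transaction amount and of login hour from the user's profile."""
    amt_high = tri(amount_dev, 0.4, 1.0, 1.6)   # 'amount deviation is high'
    hour_odd = tri(hour_dev,   0.4, 1.0, 1.6)   # 'login hour is unusual'
    # Rule 1: amount high AND hour odd -> highly suspicious (min for AND)
    # Rule 2: amount high alone        -> mildly suspicious (scaled down)
    return max(min(amt_high, hour_odd), 0.5 * amt_high)
```

A real system would aggregate many such rules and then rank behaviors by the resulting suspicion degree rather than a hard spam/ham cut.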
    • Open Access Article

      4 - A new reinforcement learning based multi-agent method for traffic shaping and buffer allocation in routers
      Hooman Tahayori
      Recognizing the distributed structure of computer networks, their random behavior, and the time limitations on control algorithms, this paper invokes the concepts of reinforcement learning and multi-agent systems for traffic shaping and buffer allocation among the ports of a router. A new traffic shaper based on the token bucket has been developed: instead of a static token production rate, a dynamic and intelligent rate is set based on network conditions. This yields reasonable bandwidth utilization while preventing traffic overload in other parts of the network. In addition, based on the same techniques, a new method for dynamic buffer allocation among the ports of a router is developed, reducing the total number of packets dropped in the whole network. Simulation results show the effectiveness of the proposed techniques.
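The abstract's idea of a token bucket whose fill rate adapts to network conditions can be sketched as follows; the queue-length feedback rule below is a crude stand-in for the paper's reinforcement-learning agent, and the adaptation factors and target are assumptions.

```python
class AdaptiveTokenBucket:
    """Token-bucket shaper whose fill rate is nudged up or down from
    observed queue occupancy -- a simple stand-in for an RL-chosen rate."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens added per unit time
        self.capacity = capacity    # bucket depth
        self.tokens = capacity      # start full

    def tick(self, dt: float, queue_len: int, target: int = 10) -> None:
        # Crude adaptation: raise the rate when the downstream queue backs
        # up past the target, lower it otherwise.
        self.rate *= 1.05 if queue_len > target else 0.95
        self.tokens = min(self.capacity, self.tokens + self.rate * dt)

    def send(self, pkt_size: float) -> bool:
        """Admit the packet only if enough tokens are available."""
        if self.tokens >= pkt_size:
            self.tokens -= pkt_size
            return True
        return False
```

A real implementation would drive `tick` from a timer and feed `queue_len` from the egress port; the 5% multiplicative step is purely illustrative.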
    • Open Access Article

      5 - Evaluation of SIP signaling implementation using QoS parameters
      Mojtaba Jahanbakhsh, azharivs, Maryam Homayooni, Ahmad Akbari
      The variety of services on IP networks and the need for network technology convergence have led many access networks to adopt IP technology. The Session Initiation Protocol (SIP) is an end-to-end application-level protocol for establishing, terminating and modifying sessions, and it has seen widespread use in IP networks thanks to distinguishing features such as being text based, independence from the underlying network and, more importantly, support for various types of mobility. These features have led SIP to be adopted as the core signaling protocol of the IP Multimedia Subsystem, the control plane proposed for next-generation networks by the 3GPP community. Nevertheless, the performance of SIP servers serving the millions of users of next-generation networks is not well established. In this paper we evaluate the performance of SIP servers using a testbed developed at the Iran University of Science & Technology. We consider eight different SIP server configurations and also study the effect of using TCP or UDP as the transport protocol for SIP packets. We measure several parameters, including call setup delay, call failure rate and SIP server throughput. Our results suggest that using SIP in large networks requires special techniques for balancing the load of SIP servers as well as for mitigating temporary overloads.
    • Open Access Article

6 - COBIT: A Suitable Framework for Measuring IT Governance in Organizations
      Mojtaba Safari, Mehdi Ghazanfari, Mohammad Fathian
      Nowadays IT is regarded as one of the most important and strategic sectors in business development and plays an important role in increasing the competitive power of banks; IT governance, in turn, is an important part of bank governance. The key role of IT in the growth and development of banks is clear to senior managers, and the crucial point is the strategic alignment of IT and business in banks, which is the first goal of IT governance. Survey results show that the average IT governance maturity in state-owned banks is 1.60. The maturity level of effective IT usage and of aligning business strategies with IT strategies in state-owned banks is therefore not in good condition. Because the banking industry in Iran is not competitive and mandatory rules and regulations are imposed on the Iranian banking system, state-owned banks have not treated electronic commerce as a strategic tool in their business.
    • Open Access Article

7 - Cross-Layer Design for Congestion Control, Routing and Scheduling in Ad Hoc Wireless Networks Considering the Electrical Power of Nodes
      Hooman Tahayori
      Ad hoc wireless networks are formed by a collection of nodes communicating over radio. Such environments present formidable challenges, one of which is maintaining connectivity while managing power consumption. In this paper, a multi-objective optimal design is considered for ad hoc networks that addresses the effect of the nodes' electrical power on cross-layer congestion control, routing and scheduling. We first formulate the rate and scheduling constraints using multi-commodity flow variables. Then, resource allocation in networks with fixed wireless channels and single-rate devices is formulated. Since the effect of the nodes' electrical power is included in the design problem, we formulate resource allocation as a maximization problem over a utility function and a cost function together, subject to those constraints. By dual decomposition, the resource allocation problem vertically decomposes into three sub-problems: congestion control, routing and scheduling, which interact through congestion and link prices. Simulation results verify the effectiveness of the proposed approach.
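A toy instance of the dual decomposition described above, for the classic single-link network utility maximization problem (logarithmic utilities, no power-cost term): each source reacts selfishly to a congestion price, and the link updates the price by a projected subgradient step. All parameters below are illustrative, not the paper's.

```python
def num_dual(num_sources: int = 3, capacity: float = 6.0,
             step: float = 0.05, iters: int = 2000):
    """Single-link network utility maximization solved by dual decomposition:
    each source picks x_i = argmax_x [log(x) - price * x] = 1 / price on its
    own, and the link raises or lowers its congestion price according to the
    excess demand (a projected subgradient step)."""
    price = 1.0
    rates = []
    for _ in range(iters):
        rates = [1.0 / price] * num_sources          # sources react to price
        price = max(1e-6, price + step * (sum(rates) - capacity))
    return rates, price
```

With 3 sources and capacity 6, the iteration converges to the fair optimum x_i = 2 and price = 0.5, the point where supply meets demand.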
    • Open Access Article

      8 - Window-Shopping as an Effective Element to Improve Viral Marketing
      Shahriyar Mohammadi, Keyvan Karimi
      E-commerce will be remembered as one of the most important concepts of the twentieth and twenty-first centuries for a unique reason: it combines business, marketing, and design. Marketing, like the others, has a long history; the rise of the Internet and e-commerce, however, has turned its use and practice in a modern direction. One modern marketing technique is viral marketing, also called word-of-mouth (WOM). This study reviews several strategies and key points of viral marketing. As its contribution, it focuses on putting the window-shopping phenomenon to work as a promoting factor in viral marketing strategies. A case study on the influence of window-shopping on a multi-stage customer decision process, compared with that of WOM, is carried out as the empirical study, from which the applicability and competency of window-shopping are deduced.
    • Open Access Article

      9 - Comprehensive Interactive Maturity Model for Mobile government
      Fateme Saghafi, Mahdi Fasanghari
      The growing development of wireless technologies, with their distinctive feature of location independence and their reach among the people, has led governments to use these technologies in their services and transactions. Measuring a government's progress in offering such services requires an m-government maturity model. The aim of this study is to provide an m-government maturity model that retains the strengths and removes the weaknesses of existing models. Researchers have developed many e-government maturity models but only two for mobile services. In this article, e-government maturity models and the experiences of other countries are studied and analyzed with the meta-synthesis method. The results are then validated through questionnaires, expert surveys and appropriate statistical tests. The resulting model comprises the following stages: presence and release of information, interaction, transaction, vertical integration, horizontal integration, portal, personalization, and optimization and innovation of services. Three further stages (transparency, accountability and democracy) run parallel to the previous steps; implementing them illuminates the government's roles in interactions with citizens, whereas in the eight previous stages citizens are considered service recipients.
    • Open Access Article

      10 - Design of Secure Communication System by Use of Synchronization of Chaotic Systems
      Mohammad Nikkhoo, Masoud Shafiee, Koorosh Kiani
      In this paper, the concept of secure synchronization of chaotic systems using adaptive and robust techniques is discussed, and a new secure communication scheme based on secure synchronization of a general class of chaotic systems, the Generalized Lorenz System, is presented. The scheme combines conventional cryptographic methods with chaotic modulation. Analytical and simulation results are presented for sine and voice signals with an unknown constant propagation delay between transmitter and receiver. The robustness of the scheme against Gaussian channel noise and its security from a brute-force viewpoint are also evaluated. The scheme yields a very large key length.
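The communication scheme itself is not reproduced here, but the underlying drive-response (Pecora-Carroll) synchronization of a Lorenz system, with the receiver reconstructing the state from the transmitted x signal alone, can be sketched in a few lines; the initial conditions, step size, and the choice of the classic Lorenz parameters are illustrative assumptions.

```python
def lorenz_sync(steps: int = 30000, dt: float = 0.002,
                sigma: float = 10.0, rho: float = 28.0, beta: float = 8.0 / 3.0):
    """Pecora-Carroll synchronization: the receiver gets only the drive's
    x signal and reconstructs (y, z) with its own copy of the subsystem.
    Returns the receiver's tracking error before and after the run."""
    x, y, z = 1.0, 1.0, 1.0          # drive (transmitter) state
    yr, zr = -8.0, 12.0              # receiver starts far away
    err0 = abs(y - yr) + abs(z - zr)
    for _ in range(steps):
        # drive system (full Lorenz), forward Euler
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        # receiver subsystem, driven by the transmitted x
        dyr = x * (rho - zr) - yr
        dzr = x * yr - beta * zr
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        yr, zr = yr + dt * dyr, zr + dt * dzr
    return err0, abs(y - yr) + abs(z - zr)
```

Because the conditional Lyapunov exponents of the x-driven subsystem are negative, the receiver's error decays exponentially, which is what makes masking a message in the chaotic carrier recoverable at the other end.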
    • Open Access Article

      11 - Personalized E-learning Environment Using Fuzzy Recommender System base on Combination of Learning Style and Cognitive Trait
      Nafise Saberi
      Personalization requires identifying learners' preferences and characteristics, an essential part of any e-learning environment; without identifying learners' mental characteristics and learning approaches, personalization is not possible. The more completely and accurately this identification is done, the more reliable the learner model built on it will be. This research combines and relates effective theories for detecting learning approaches, namely learning styles and cognitive traits, and uses fuzzy logic to reduce the ambiguity in learners' opinions and feedback. The study was conducted over one semester with e-learning students in engineering, using a fuzzy recommender system in two phases. The recommender is part of an intelligent tutoring system: in the first phase, covering half of the courses, it prepared recommendations based on learning style alone; in the second phase, covering the remaining courses, it prepared recommendations based on the combination of the two theories. Learners' abilities were monitored and evaluated with fuzzy item response theory at every step. The measures of the intelligent tutoring system improved after this combination, confirming that accurate recommendations were presented at appropriate times: the time to effective learning and the number of referrals to the tutor decreased, while learners' success rate and satisfaction, and both learners' and tutors' views of e-learning, improved markedly.
    • Open Access Article

      12 - Performance Improvement of Automatic Language Identification Using GMM-SVM Method
      Fahime Ghasemian, Homayoun Homayoun
      The GMM is one of the most successful models in automatic language identification. In this paper we propose a new model named the adapted-weight GMM (AW-GMM). It is similar to the GMM, but the weights are determined by a GMM-VSM LID system according to the power of each component in discriminating one language from the others. Considering the computational complexity of GMM-VSM, we also propose a technique for constructing bigram sequences of components that can be applied to higher sequence orders and decreases the complexity. Experiments on four languages of the OGI corpus (English, Farsi, French and German) show the effectiveness of the proposed techniques.
    • Open Access Article

      13 - Designing a Fuzzy Expert System for Selecting an Appropriate Contractor in Information Technology Outsourcing
      Shaban Elahi, Nadia Kalantari, Alireza Hassanzade, Sara Shamsollahi
      The growing complexity and cost of information technology systems have created many infrastructure and manpower problems for organizations, problems that outsourcing has reduced. Organizations try in various ways to increase the success of outsourcing projects. One of the important reasons such projects fail, especially in IT, given its major role in acquiring competitive advantage, is selecting an inappropriate contractor; because the selection criteria are numerous and conflicting, the choice is complex. The purpose of this research is to determine the important criteria, specify the weight of each, and finally design a fuzzy expert system for selecting the best contractor in IT outsourcing. Knowledge was acquired from experts, IT managers and specialists, through a questionnaire. To evaluate the system's validity, it was applied in an IT company; the results show the favorable performance of the contractor-selection expert system.
    • Open Access Article

      14 - Adaptive Probabilistic Flooding for Ad Hoc Networks
      Fateme Nourazar, Sabaei Sabaei
      Broadcasting is one of the most fundamental operations in mobile ad hoc networks and serves as a building block in many routing protocols. The simplest approach to broadcasting is flooding, but it generates many redundant messages that waste scarce resources such as bandwidth and battery power and may lead to contention, collisions and a severe decrease in network performance. Many schemes have been developed to improve the performance of the flooding algorithm; they fall mainly into two basic approaches, deterministic and probabilistic, of which the second has received more attention. The existing schemes, however, either increase latency or decrease the reachability of the algorithm. In this paper, we propose a new scheme based on probabilistic rebroadcasting driven by local observations: the probability function of each node is adjusted dynamically from what it observes locally. Simulation results show that the new scheme considerably decreases average latency compared with similar existing schemes while maintaining reachability and saving messages.
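One plausible shape of the local-observation rule described above (not the paper's exact probability function) is to lower the rebroadcast probability as the observed neighbour count grows, so sparse nodes forward almost surely while dense nodes rarely do; the reference density and clamping bounds below are assumptions.

```python
import random

def rebroadcast_probability(n_neighbors: int, p_min: float = 0.2,
                            p_max: float = 1.0, n_ref: int = 8) -> float:
    """Adapt the forwarding probability to local density: probability is
    inversely proportional to the neighbour count, clamped to [p_min, p_max]."""
    p = n_ref / max(n_neighbors, 1)
    return max(p_min, min(p_max, p))

def decide(n_neighbors: int, rng=random.random) -> bool:
    """Bernoulli rebroadcast decision for one received broadcast packet."""
    return rng() < rebroadcast_probability(n_neighbors)
```

Such a rule keeps reachability in sparse regions (where every rebroadcast matters) while suppressing the redundant transmissions that cause the broadcast storm in dense regions.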
    • Open Access Article

      15 - Blind Correction of Lens Aberration Using Modified Zernike Moments
      Kambiz Rahbar, Karim Faez
      The quality of the image formed by an optical system is reduced by aberrations. This paper addresses the blind correction of lens aberration: modified Zernike moments are introduced to represent the lens aberration model, and their coefficients are estimated through poly-spectral analysis. The model parameters are divided into asymmetric and symmetric parts, estimated through bi-coherence and tri-coherence respectively. The precision obtained compares favorably with state-of-the-art poly-spectral analysis and reaches an RMSE of 0.1 pixels.
    • Open Access Article

16 - Training an MLP Neural Network for Image Compression Using the GSA Method
      Maryam Dehbashian, Hamid Zahiri
      Image compression is an important research field in image processing, and many methods have been presented for it. The neural network is one such method and has performed well in many applications. The usual way to train neural networks is error back-propagation, whose drawbacks are slow convergence and entrapment in local optima. Recently, researchers have applied heuristic algorithms to neural network training. This paper introduces a new training method based on the Gravitational Search Algorithm (GSA), one of the newest swarm-intelligence optimization approaches. In this algorithm, the candidate answers in the search space are masses that interact through gravitational force and change their positions; gradually, masses with better fitness gain more mass and exert more influence on the others. In this research, an MLP neural network is trained with the GSA method for image compression. To evaluate the efficiency of the presented compressor, we compared its performance with PSO and error back-propagation in compressing four standard images. The final results show the salient capability of the proposed method in training MLP neural networks.
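A compact sketch of the GSA update loop described above: fitness-derived masses, pairwise gravitational pulls, and velocity/position updates. In the paper these positions would encode MLP weights; here the objective, population size, bounds, and decay schedule are illustrative choices, not the paper's settings.

```python
import math
import random

def gsa_minimize(f, dim=2, agents=20, iters=200,
                 g0=100.0, alpha=20.0, seed=1):
    """Core Gravitational Search Algorithm loop on a generic objective f.
    Masses are derived from fitness; each agent is accelerated toward the
    others in proportion to their mass.  Returns the best point seen."""
    rnd = random.Random(seed)
    X = [[rnd.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(agents)]
    V = [[0.0] * dim for _ in range(agents)]
    best_x, best_f = None, float("inf")
    for t in range(iters):
        G = g0 * math.exp(-alpha * t / iters)      # decaying gravity constant
        fit = [f(x) for x in X]
        for x, fi in zip(X, fit):                  # track best-so-far
            if fi < best_f:
                best_x, best_f = list(x), fi
        lo, hi = min(fit), max(fit)
        m = [(hi - fi) / (hi - lo + 1e-12) for fi in fit]
        M = [mi / (sum(m) + 1e-12) for mi in m]    # normalized masses
        for i in range(agents):
            for d in range(dim):
                acc = 0.0
                for j in range(agents):
                    if j != i:
                        dist = math.dist(X[i], X[j]) + 1e-12
                        # acceleration from agent j: G * M_j * direction / distance
                        acc += rnd.random() * G * M[j] * (X[j][d] - X[i][d]) / dist
                V[i][d] = rnd.random() * V[i][d] + acc
                X[i][d] += V[i][d]
    return best_x
```

To train an MLP this way, `f` would decode each position vector into the network's weights and return the reconstruction error on the training images.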
    • Open Access Article

      17 - DWT-SVD based Semi-Blind Digital Image Watermarking
      Danyali Danyali, Fardin Akhlaghi, Morteza Makhlooghi
      With the development of digital multimedia technology and the rapid growth of the Internet, illegal copying and exchange of digital multimedia sources have also spread, and copyright protection plays an essential role in such an environment. In this paper a new semi-blind image watermarking algorithm for proof of ownership is proposed. First, the original image is transformed to the transform domain and the low-frequency sub-band is selected to form a reference image. Then 1-level wavelet decomposition is applied to the reference image and to the grey-scale watermark image. Finally, embedding is done by modifying the singular values of the reference image's sub-bands with the corresponding singular values of the watermark's sub-bands. Because the reference image is needed during extraction, the method is semi-blind. Robustness against various attacks applied to the watermarked image is investigated; the experimental results show that the proposed method is more robust than previous work against different attacks and that the watermarked image looks visually identical to the original.
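The first step of such a pipeline, a 1-level 2-D wavelet decomposition into LL/LH/HL/HH sub-bands, can be sketched in pure Python. The Haar filter is an assumption (the abstract does not name the wavelet), and the subsequent SVD-based embedding is omitted.

```python
def haar_dwt2(img):
    """One-level 2-D Haar DWT of an even-sized grayscale image given as a
    list of lists, returning the LL, LH, HL and HH sub-bands (each half the
    original size).  The SVD-based embedding would operate on these."""
    h, w = len(img), len(img[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]
    LH = [[0.0] * (w // 2) for _ in range(h // 2)]
    HL = [[0.0] * (w // 2) for _ in range(h // 2)]
    HH = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 2.0   # average (low-low)
            LH[i // 2][j // 2] = (a - b + c - d) / 2.0   # horizontal detail
            HL[i // 2][j // 2] = (a + b - c - d) / 2.0   # vertical detail
            HH[i // 2][j // 2] = (a - b - c + d) / 2.0   # diagonal detail
    return LL, LH, HL, HH
```

Embedding in the singular values of these sub-bands, rather than in raw pixels, is what gives such schemes their robustness to common signal-processing attacks.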
    • Open Access Article

      18 - Sliding Mode Controller for TCP Congestion Control
      Roohollah Barzamini, Masoud Shafiee
      In this paper a new sliding mode controller for the congestion problem in TCP networks is proposed. Congestion occurs under high network loads and affects several aspects of network behavior; congestion control prevents or reduces load at bottlenecks and manages traffic. Using control theory, the closed-loop data transfer process in computer networks can cope with the congestion problem. A sliding mode controller provides robust behavior in the presence of uncertainties and disturbances: the states must reach a predefined surface (the sliding surface) in finite time and remain on it thereafter. Motion on the sliding surface is independent of the uncertainties, which is what makes the technique a robust control approach. After applying the controller to the system, the stability of the closed loop is studied with the Lyapunov approach. Simulation results show the efficiency of the sliding mode controller in different scenarios.
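The reaching-and-sliding behaviour described above can be illustrated on a scalar double integrator (not the paper's TCP/AQM model): with surface s = v + lam*x and switching law u = -k*sign(s), the state reaches the surface in finite time and then decays along it toward the origin. Gains and step size are illustrative.

```python
def slide(x: float = 1.0, v: float = 0.0, lam: float = 1.0,
          k: float = 2.0, dt: float = 0.001, steps: int = 20000):
    """Drive the double integrator x'' = u to the origin with a sliding-mode
    law: sliding surface s = v + lam * x, control u = -k * sign(s)."""
    for _ in range(steps):
        s = v + lam * x
        u = -k * (1 if s > 0 else -1 if s < 0 else 0)
        x += dt * v          # forward Euler on position
        v += dt * u          # and on velocity
    return x, v
```

On the surface s = 0 the dynamics reduce to x' = -lam * x, independent of any matched disturbance on u, which is the robustness property the abstract refers to; in discrete time the sign switching produces the small residual chattering visible in the final state.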
    • Open Access Article

      19 - A Managerial Examination of the Successful Parameters in the B2C Business
      Shila Mosammami, Mahmood Moradi, Asadollah Shahbahrami
      Electronic commerce (EC) is currently a dynamic channel for trading: the simplicity and speed it offers are competitive advantages that traditional organizations cannot match, so not only individuals but also governments need a presence in it. According to our study, the success of an EC system is shaped by three categories: the technical characteristic view, the psychosocial characteristic view and the managerial characteristic view; this research examines the managerial characteristic view. The statistical population consisted of 344 students of the University of Guilan, whose shopping behavior was collected with offline and online questionnaires over a limited period of time. The knowledge extracted for further research is discussed in the following sections.
    • Open Access Article

20 - Analysis of the Current Situation of ICT in Rural Areas of the Country (Using the Delphi Method)
      Mohsen Heidary
      Nowadays, information and communication technology serves as a central management tool around the world. Developing ICT in rural communities requires accurate planning and can facilitate rural development. In our country, about 40 percent of the population lives in rural areas, rural economic development plays a special role in national development, and villages lack access to urban facilities and services; effective and appropriate strategies are therefore necessary for developing and using rural ICT services. The main purpose of this study was to investigate the difficulties and challenges of the current status of ICT in the country's rural areas. The Delphi technique was used for this purpose. The study group consisted of 30 experts in agricultural development, rural planning and information technology who are well aware of rural ICT. Based on the results, the general problems of rural ICT development in the country fall into five categories: infrastructure, education, support, social and cultural issues, and policy and planning. The most difficult problems concern ICT infrastructure and rural education; policymakers in rural development are therefore expected to take the necessary steps to overcome the country's rural ICT infrastructure problems and to hold computer and Internet familiarization courses for villagers.
    • Open Access Article

      21 - A Novel Model for Intrusion Detection with Mobile Agents and Game Theory
      Amin Nezarat Mehdi Raja Gholamhossein Dastghaibyfard
      The proposed framework applies two game-theoretic models for the economic deployment of an intrusion detection system (IDS). The first scheme models and analyzes the interaction behaviors between an attacker and an intrusion detection agent as a non-cooperative game, and the security risk value is then derived from the mixed-strategy Nash equilibrium. The second scheme uses the security risk value to compute the Shapley value of each intrusion detection agent under various threat levels, so that a fair agent allocation yields a minimum-cost IDS deployment. Numerical examples show that the network administrator can quantitatively evaluate the security risk of each intrusion detection agent and easily select the most effective IDS agent deployment for the various threat levels.
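      As an illustration of the first scheme, the mixed-strategy equilibrium of a 2x2 attacker/defender game has a closed form. The inspection-game framing and payoff numbers below are illustrative assumptions, not the paper's actual model:

```python
def mixed_nash_2x2(a, b, c, d):
    """Attacker's payoffs in a 2x2 inspection game:
    a = (attack, monitor), b = (attack, idle),
    c = (no attack, monitor), d = (no attack, idle).
    Returns the defender's monitoring probability q that makes the
    attacker indifferent, plus the attacker's expected payoff at
    equilibrium (a proxy for the paper's 'security risk value')."""
    q = (d - b) / ((a - b) - (c - d))  # attacker indifference condition
    risk = a * q + b * (1 - q)         # attacker's equilibrium payoff
    return q, risk

# Illustrative payoffs: a detected attack costs the attacker 1,
# an undetected attack gains 2, and not attacking yields 0.
q, risk = mixed_nash_2x2(-1, 2, 0, 0)
```

      With these payoffs q = 2/3: the defender must monitor two thirds of the time before attacking stops being profitable, and the attacker's equilibrium payoff is 0.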
    • Open Access Article

      22 - A proper method for advertising email classification based on users' profiles
      Rahim Hazratgholizadeh Mohammad Fathian
      In general, spam is defined by whether the client is satisfied, not by the content of the client's email. Under this definition, problems arise in marketing and advertising: some advertising emails may be spam for some users and not for others. To deal with this problem, many researchers design anti-spam systems based on personal profiles. Machine learning methods are normally used for spam classification with good accuracy; however, there is no single successful approach based on an electronic commerce perspective. In this paper, we first prepared a new profile that can lead to better simulation of users' behavior. We then gave this profile, together with advertising emails, to students and collected their answers. Next, we examined well-known methods for email classification. Finally, comparing the different methods against standard data mining criteria shows that the neural network method has the best accuracy across various data sets.
    • Open Access Article

      23 - Optimized Modeling for satisfaction in the relationship between a physician and patient based on machine learning methods
      Fatemeh Saghafi Mojtaba Shadmehr Zainabolhoda Heshmati Hadi Veisi
      Health has always been one of the most important concerns of humankind. The goal of this research is to identify which factors affect patient satisfaction in the physician-patient relationship. Since this relationship is a form of healthcare service, the SERVQUAL service quality assessment method has been used as a framework; however, the questions were revised based on the previous literature and experts' views, leading to a questionnaire designed for the healthcare domain. Data collection was performed using questionnaires on subjects selected among clients of rhinoplasty centers in Tehran. To analyze the data, three machine learning approaches were implemented: Decision Tree, Support Vector Machine, and Artificial Neural Networks. A number of possible factors affecting the patient-physician relationship were used as input, and patient satisfaction was taken as output. Comparing the results of the three methods, the Artificial Neural Network showed better performance and was therefore used for prioritizing the effective factors in this relationship. The results indicate that receiving the information the patient expects their physician to give is the characteristic most effective in patient satisfaction. The ranking of the obtained features was compared with similar studies; the outcomes were very similar and confirmed the results.
    • Open Access Article

      24 - Improvement of the mean shift tracker for tracking targets with variable photometric patterns
      Payman Moallem Javad Abbaspour Alireza Memarmoghada Masoud Kavoshtehrani
      The mean shift algorithm is one of the popular methods in visual tracking of non-rigid moving targets. Basically, it repeatedly locates the central mode of a desired target. Object representation in the mean shift algorithm is based on the target's feature histogram within a non-oriented individual kernel mask. Indeed, adjusting the kernel scale is the most critical challenge in this method: to date, no method has been presented that can accurately and efficiently adapt the kernel scale during tracking when the target is resized. Another problem of the mean shift tracking algorithm arises whenever the photometric properties of the target texture change. To solve these problems, this paper presents a modified mean shift tracking algorithm that uses a robust adaptive sizing technique. It can also cope with photometric changes of the target template by adapting its model in every frame of the image sequence. In our proposed method, the target window is first adaptively resized with respect to the spatio-temporal gradient powers of its pixel intensities in the current frame, and the mean shift algorithm is then applied to the resized window. Experimental results show that, compared to the standard mean shift algorithm, our proposed method not only reduces target center location errors but also tracks the target efficiently in the presence of changing illumination.
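      The core mean shift iteration is simply a repeated move to the local mean. Below is a minimal one-dimensional sketch with a flat kernel; the actual tracker operates on feature histograms in two dimensions, and the point values and bandwidth here are arbitrary illustrations:

```python
def mean_shift_mode(points, start, bandwidth, tol=1e-6):
    """Repeatedly shift the window center to the mean of the
    points it covers until the shift is negligible."""
    center = start
    while True:
        inside = [p for p in points if abs(p - center) <= bandwidth]
        new_center = sum(inside) / len(inside)
        if abs(new_center - center) < tol:
            return new_center
        center = new_center

# Starting at 4.0, the window climbs onto the cluster around 5.0
# and ignores the outlier at 9.0.
mode = mean_shift_mode([4.8, 5.0, 5.2, 9.0], start=4.0, bandwidth=1.0)
```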
    • Open Access Article

      25 - Multimedia teaching and its effects on learning and retention of English grammar
      Somayeh Ahari
      The increasing complexity and cost of information technology systems have created many infrastructure and manpower problems for organizations, problems that have been reduced through outsourcing. All organizations try, in different ways, to increase the success of outsourcing projects. One of the important reasons for the failure of these projects, especially in the IT area given its major role in acquiring competitive advantage, is selecting an inappropriate contractor. Because of the different and contradictory criteria involved, this selection is complex. The purpose of this research is to determine the important criteria, specify the weight of each criterion, and finally design a fuzzy expert system for selecting the best contractor in IT outsourcing. Knowledge was acquired from experts (IT managers and specialists) through a questionnaire. To evaluate the validity of the system, it was used in an IT company. The results show the favorable performance of the contractor selection expert system.
    • Open Access Article

      26 - Extracting the Information Technology Architecture of Hospitals with an Implementation Approach for Iran
      Atefehsadat Haghighathoseini Hossein Bobarshad Hadi Zare
      Nowadays, smart and fast services to patients and the move toward the next-generation hospital are essential parts of the health field. Producing an information technology architecture for hospital organizations is the foundation for accessible smart services and for providing services with more speed and higher quality than traditional systems. In this paper, the aim is to present an indigenous IT architecture based on criteria and metrics important in Iranian hospitals. The TOGAF framework is used, then adapted and localized for Shariati Hospital based on Iran's indigenous conditions. For this aim, experts' views and 134 questionnaires were applied, and the results were analyzed through suitable statistical tests. The IT architecture is designed as a conceptual model consisting of four input layers and four infrastructure layers. The results show that, of 145 possible items, 111 were verified as applicable to the hospital. The customized framework is called the Hospital IT Architecture. The approved framework, in 8 layers and 11 components, can be used in hospitals to implement enterprise architecture and could serve as an indigenous reference architecture for implementing enterprise architecture in Iranian hospital organizations.
    • Open Access Article

      27 - Using clustering in the AODV routing protocol for vehicular ad hoc networks in a highway scenario
      Amin Feyzi
      Vehicular ad hoc networks are a subset of mobile ad hoc networks in which vehicles are considered the network nodes. Their major difference is the rapid mobility of nodes, which causes quick topology changes in the network. These quick topology changes are a big challenge for routing, so routing protocols in these networks must be robust and reliable. The AODV routing protocol is one of the well-known routing protocols for vehicular ad hoc networks, but applying it to these networks has problems: the number of control messages increases with the scale of the network and the number of nodes. One way to reduce the overhead of the AODV routing protocol is to cluster the network nodes. In this paper, a modified K-means algorithm is used for clustering the nodes, and particle swarm optimization is used for selecting cluster heads. The proposed method improved the normalized routing load and increased the packet delivery rate compared to the AODV routing protocol.
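      The clustering step can be sketched as plain k-means over vehicle positions along the highway. The positions and initial centers below are made up for illustration, and the PSO-based cluster-head election the paper adds is not shown:

```python
def kmeans_1d(positions, centers, iters=10):
    """Lloyd's algorithm in 1-D: assign each vehicle to the
    nearest cluster center, then recompute each center as the
    mean of its assigned vehicles."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in positions:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # keep the old center if a cluster ends up empty
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Two groups of vehicles (around km 1 and km 11) on a highway.
centers = kmeans_1d([0, 1, 2, 10, 11, 12], centers=[0, 5])
```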
    • Open Access Article

      28 - Designing a System Dynamics (SD) Model for the Policy of Upgrading the Indices of Iran's ICT Network
      Hamid Honarparvaran
      In recent years, developments in the field of information and telecommunications have been accompanied by major changes in different areas of human life. Humans have constantly employed technology, and the track record of human life abounds with innovation in information and communication technologies (ICT), known as new or high technology. Therefore, the ICT Development Index (IDI) is remarkably important. This research aims to design a system dynamics (SD) model for the policy of upgrading the indices of Iran's ICT network. Accordingly, after investigating the IDI and extracting its indices, the causal loops and stock-and-flow diagrams were drawn using SD. The simulation results show that, after implementing policies to enhance economic growth, the behavior of the studied key variables improved the quality and quantity of ICT infrastructure, social indices, the culture of employing ICT, and the technical knowledge of ICT operators, and the enhancement process accelerated.
    • Open Access Article

      29 - Ranking the Failure Factors of Electronic Commerce in the Technical Infrastructure of Tile and Ceramic Factories in Meybod County
      edehghani edehghani
      Electronic commerce is considered one of the strategies for maintaining competitive advantage in companies. Successful electronic commerce in any business requires certain prerequisites and infrastructures, of which technical infrastructure is the most fundamental and most important. In this study, a comprehensive model is presented that introduces the factors affecting the success of electronic commerce in the technical infrastructure and shows how the importance-performance analysis model can identify the undesirable infrastructure factors that lead to the failure of electronic commerce in a business and prioritize them for improvement. The technical infrastructure of electronic commerce and the capabilities necessary for successful implementation are reviewed, and the technical infrastructure factors that cause the failure of e-commerce in the tile and ceramic industry of Meybod County, one of the main centers of ceramic and tile production in the world and the biggest exporter of tiles in Iran, are evaluated using the importance-performance analysis method. The results indicate that the internal infrastructure as well as the hardware and software available in these firms are in acceptable condition, and that non-use of electronic payments, poor public infrastructure (Internet), lack of sufficient human resources, and inadequate website capabilities are, respectively, the main factors for the failure of electronic commerce in terms of technical infrastructure in these firms.
    • Open Access Article

      30 - Multicast computer network routing using genetic algorithms and ant colony optimization
      Mohammad Pourmahmood Aghababa
      Due to the growth and development of computer networks, the importance of routing has increased, and the importance of multicast networks is not negligible nowadays. Many multimedia programs need a communication link to send a packet from one sender to several receivers. To support such programs, an optimal multicast tree is needed to indicate the optimal routes from the sending source to the corresponding sinks. Providing an optimal tree for routing is a complicated problem. In this paper, we seek a method for routing in multicast networks that considers parameters such as cost and delay. The paper also emphasizes that each parameter in the routing problem has a different value for different packets, and optimal multicast routing trees are proposed in accordance with these parameters. To this end, genetic algorithm and ant colony optimization approaches are adopted. The simulation results show that the presented algorithms are able to produce multicast trees that are optimal with respect to the packets.
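      A GA or ACO search over candidate trees needs a fitness evaluation of each tree. Below is a minimal sketch of the two objectives (total edge cost and worst source-to-sink delay) on a rooted tree; the tree encoding and the numbers are illustrative assumptions, not the paper's representation:

```python
def multicast_tree_metrics(tree, source, sinks):
    """tree maps each node to its children as (child, cost, delay)
    tuples. Returns (total edge cost, max source-to-sink delay),
    the two objectives an evolutionary search would trade off."""
    total_cost = 0
    delays = {source: 0}
    stack = [source]
    while stack:
        node = stack.pop()
        for child, cost, delay in tree.get(node, []):
            total_cost += cost
            delays[child] = delays[node] + delay
            stack.append(child)
    return total_cost, max(delays[s] for s in sinks)

# A -> B and A -> C -> D, with sinks B and D.
tree = {"A": [("B", 2, 1), ("C", 3, 2)], "C": [("D", 1, 1)]}
cost, delay = multicast_tree_metrics(tree, "A", ["B", "D"])
```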
    • Open Access Article

      31 - Investigating the Effects of Hardware Parameters on Power Consumption in SpMV Algorithms on Graphics Processing Units (GPUs)
      Farshad Khunjush
      Although sparse matrix-vector multiplication (SpMV) algorithms are simple, they constitute important parts of linear algebra algorithms in mathematics and physics. As these algorithms can be run in parallel, Graphics Processing Units (GPUs) have been considered among the best candidates to run them. In recent years, power consumption has been considered one of the metrics that should be taken into account in addition to performance. Despite this importance, to the best of our knowledge, studies of the power consumption of SpMV algorithms on GPUs are scarce. In this paper, we investigate the effects of hardware parameters on the power consumption of SpMV algorithms on GPUs. For this, we leverage the possibility of setting the GPU's parameters to investigate their effects on power consumption. These configurations have been applied to different sparse matrix formats, and the best parameters are selected for the best performance-per-power metric. As a result of this study, the obtained settings can be applied when running different linear algebra algorithms on GPUs to obtain the best performance per power.
    • Open Access Article

      32 - Improving the directivity of the plasmonic sectoral horn nanoantenna using a lens in its aperture
      Masumeh  Sharifi Najmeh  Nozhat Ehsan  Zareian-Jahromi
      Horn antennas can achieve good impedance matching between the waveguide and free space due to the gradual increase in the aperture size. In this paper, a novel plasmonic sectoral horn nanoantenna based on using a lens in the aperture is proposed. It is shown that, in addition to improving the directivity, a proper lens structure also reduces the reflection coefficient. The maximum directivity improvement is about 2 dBi compared to the structure without a lens. It is also shown that the radiation pattern can be controlled by utilizing an electro-optical material as the lens.
    • Open Access Article

      33 - Improvement of the bulk sensitivity and FoM of the plasmonic nanodipole antenna array
      Samira  Amiri Najmeh  Nozhat
      In this paper, the sensitivity of a plasmonic nanodipole antenna array for different materials of the metal nanodipole and substrate is calculated by changing the refractive index of the surrounding medium. The performance of our proposed array is studied at the wavelengths of 1310 and 1550 nm, the second and third telecommunication windows. It is shown that by using a silver (Ag) nanodipole instead of a gold (Au) one, the bulk sensitivity of the nanostructure is improved. By replacing the substrate material from Si with SiO2, the sensitivity increases up to 1220 and 1150 nm/RIU at the wavelengths of 1310 and 1550 nm, respectively, which is very suitable for sensing applications. Moreover, the figure of merit (FoM) of the plasmonic sensor is calculated for both substrate and nanodipole materials. The maximum FoM, equal to 14.35, is obtained for the nanoantenna array with the SiO2 substrate and Ag nanodipole. Furthermore, it is shown that increasing the thickness of the nanodipole enhances the nanostructure's sensitivity and FoM.
    • Open Access Article

      34 - A Design of a Rectangular Waveguide TM11-to-TE10 Mode Converter for S-band Applications
      Hamed Noroozy Hossein Chabok Samane Pakniyat
      A compact, easy-to-fabricate, and practical rectangular waveguide TM11-to-TE10 mode converter is presented in this paper. The design procedure of the proposed structure can be divided into two sequential parts. The first is dedicated to the transformation from the TM11 to the TEM mode using a central conductor, while the second transforms the TEM mode to the TE10 mode using a dielectric-loaded waveguide that introduces a 180° phase shift. The proposed structure has the advantage of high efficiency, above 90%, as demonstrated in the simulation results.
    • Open Access Article

      35 - Presenting a novel solution to choose a proper database for storing big data in national network services
      Mohammad Reza Ahmadi Davood Maleki Ehsan Arianyan
      The increasing development of data-producing tools in different services, the need to store the results of large-scale processing produced by various activities in the national information network services, and the data produced by the private sector and social networks have made migration to new database solutions with appropriate features inevitable. With the expansion and change in the size and composition of data and the formation of big data, traditional practices and patterns do not meet the new requirements. Therefore, it has become necessary to use information storage systems in new and scalable formats and models. In this paper, the basic structural dimensions and different functions of both traditional databases and modern storage systems are reviewed, and a new technical solution for migrating from traditional to modern databases is presented. The basic features of connecting traditional and modern databases for storing and processing data obtained from the comprehensive services of the national information network are also presented, and the parameters and capabilities of databases in the standard and Hadoop contexts are examined. In addition, as a practical example, a solution combining traditional and modern databases is presented, evaluated, and compared using the BSC method. It is shown that across data sets of different volumes, combined use of traditional and modern databases can be the most efficient solution.
    • Open Access Article

      36 - Computer security models and a proposed new perspective: a review paper
      Hadi Sadjadi Reza Kalantari
      In this article, the use of computer security models and their benefits are first discussed in a novel way. Then, while briefly introducing the space of computer security encounters in the form of an ontology, three perspectives on the study of patterns in this field are identified and distinguished from each other for the first time: the view of secure models, the view of security models, and the view of frameworks and systems for security models. The first and third perspectives are briefly explained, and the second is studied in detail from the perspective of pattern organization, covering five types: organization based on the software lifecycle, organization based on logical levels, organization based on threat classification, organization based on attack classification, and organization based on application. Through this introduction of the patterns, the reader acquires a comprehensive view of the discourse of computer security patterns and the knowledge necessary to make better use of them. Finally, the analysis and idea of this research are presented in the form of a new type of organization intended to facilitate the proper use and addressing of patterns. The existing categories are mostly static and forward-looking and lack the necessary dynamism and retrospection; the idea of covering all stakeholders and the security ontology can provide this feature and, in addition, include agile patterns as well.
    • Open Access Article

      37 - Improving resource allocation in mobile edge computing using gray wolf and particle swarm optimization algorithms
      Seyed Ebrahim Dashti Saeid Shabooei
      Mobile edge computing improves the experience of end users in achieving appropriate services and service quality. In this paper, the problem of improving resource allocation when offloading tasks from mobile devices to edge servers was investigated. Some tasks are processed locally and some are offloaded to edge servers. The main issue is scheduling the offloaded tasks on the virtual machines of the computing network so as to minimize computing time, service cost, network waste, and the maximum connection of a task to the network. A hybrid particle swarm and grey wolf algorithm was introduced to manage resource allocation and task scheduling and achieve an optimal result in edge computing networks. The comparison results show improved waiting time and cost in the proposed approach: on average, the proposed model performed better, reducing the completion time by 10% and increasing resource utilization by 16%.
    • Open Access Article

      38 - Identifying and ranking factors affecting the digital transformation strategy in Iran's road freight transportation industry focusing on the Internet of Things and data analytics
      Mehran Ehteshami Mohammad Hasan Cheraghali Bita Tabrizian Maryam Teimourian sefidehkhan
      This research was done with the aim of identifying and ranking the factors affecting the digital transformation strategy in Iran's road freight transportation industry, focusing on the Internet of Things and data analytics. After reviewing the literature, semi-structured interviews were conducted with 20 academic and industry experts in Iran's road freight transportation, selected using purposive sampling and the saturation principle. In the quantitative part, the opinions of 170 employees of this industry, selected based on Cochran's formula and stratified sampling, were collected using a researcher-made questionnaire. The Delphi technique, literature review, and coding were used to analyze the data in the qualitative part; in the quantitative part, inferential statistics and the SPSS and SmartPLS software were used. Finally, 40 indicators were extracted in the form of 8 factors, and the indicators and affecting factors were ranked using factor analysis. The results show that internal factors have the highest rank, followed by software infrastructure, hardware infrastructure, economic, external, legal, cultural, and penetration factors, respectively. It is therefore suggested that organizations align their human resource empowerment programs with the use of technology and digital tools.
    • Open Access Article

      39 - Priority-based Deployment of IoT Applications in Fog
      Masomeh Azimzadeh Ali Rezaee Somayyeh Jafarali Jassbi MohammadMahdi Esnaashari
      Fog computing technology has emerged to meet the needs of modern IoT applications for low latency, high security, and more. On the other hand, the limitations of fog computing, such as heterogeneity, distribution, and resource constraints, make service management in this environment challenging. Intelligent service placement means placing application services on fog nodes so as to ensure their QoS and the effective use of resources. Using communities to organize nodes for service placement is one approach in this area, where communities are mainly created based on the connection density of nodes and applications are placed based on a single-criterion prioritization approach. This leads to unbalanced communities and inefficient placement of applications. This paper presents a priority-based method for deploying applications in the fog environment: balanced communities are created, and applications are placed in them based on a multi-criteria prioritization approach, leading to optimal use of network capacity and increased QoS. The simulation results show that the proposed method improves deadline satisfaction by up to 22%, increases availability by about 12%, and increases resource utilization by up to 10%.
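      Multi-criteria prioritization can be as simple as a weighted sum over normalized criteria. The criteria names, weights, and application values below are illustrative guesses, not the paper's actual scoring:

```python
def priority_score(app, weights):
    """Weighted sum of normalized criteria in [0, 1]; a higher
    score means the application's services are placed first."""
    return sum(weights[k] * app[k] for k in weights)

# Hypothetical criteria and weights for two IoT applications.
weights = {"deadline_urgency": 0.5, "bandwidth_need": 0.3, "cpu_need": 0.2}
apps = {
    "video_analytics": {"deadline_urgency": 0.9, "bandwidth_need": 0.8, "cpu_need": 0.7},
    "telemetry":       {"deadline_urgency": 0.3, "bandwidth_need": 0.2, "cpu_need": 0.1},
}
# Deploy applications in descending priority order.
order = sorted(apps, key=lambda name: priority_score(apps[name], weights), reverse=True)
```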
    • Open Access Article

      40 - Persian Stance Detection Based On Multi-Classifier Fusion
      Mojgan Farhoodi Abbas Toloie Eshlaghy
      Stance detection (also known as stance classification, stance prediction, and stance analysis) is a recent research topic that has become an emerging paradigm in opinion mining. The purpose of stance detection is to identify the author's viewpoint toward a specific target, which has become a key component of applications such as fake news detection, claim validation, argument search, etc. In this paper, we applied three approaches to Persian stance detection: machine learning, deep learning, and transfer learning. We then proposed a multi-classifier fusion framework for reaching a final decision on the output results, using a weighted majority voting method based on the accuracy of the classifiers to combine their results. The experimental results showed that the proposed multi-classifier fusion method performs better than the individual classifiers.
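      The fusion rule described above can be sketched as accuracy-weighted voting; the stance labels and accuracy figures below are hypothetical:

```python
from collections import defaultdict

def weighted_majority_vote(predictions, accuracies):
    """Add each classifier's validation accuracy as the weight of
    its vote and return the label with the largest total."""
    scores = defaultdict(float)
    for label, acc in zip(predictions, accuracies):
        scores[label] += acc
    return max(scores, key=scores.get)

# Three stance classifiers vote on one document: two weaker
# models outvote a single stronger one (0.71 + 0.65 > 0.88).
stance = weighted_majority_vote(["favor", "against", "favor"], [0.71, 0.88, 0.65])
```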
    • Open Access Article

      41 - Anomaly and Intrusion Detection through Data Mining and Feature Selection Using the PSO Algorithm
      Fereidoon Rezaei Mohamad Ali Afshar Kazemi Mohammad Ali Keramati
      Today, with the development of technology, the increased use of the Internet in business, and the movement of businesses from physical to virtual and online, attacks and anomalies have also changed from physical to virtual: instead of robbing a store or market, individuals intrude into websites and virtual markets through cyberattacks and disrupt them. Detection of attacks and anomalies is one of the new challenges in promoting e-commerce technologies. Detecting network anomalies and destructive activities in e-commerce can be done by analyzing the behavior of network traffic. Data mining techniques are used extensively in intrusion detection systems (IDS) to detect anomalies. Reducing the dimensionality of features plays an important role in intrusion detection, since detecting anomalies in high-dimensional network traffic is a time-consuming process. Choosing suitable and accurate features speeds up the analysis, resulting in faster detection. In this article, by using data mining algorithms such as J48 together with PSO-based feature selection, we were able to significantly improve the accuracy of detecting anomalies and attacks.
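A minimal sketch of binary PSO feature selection of the kind this abstract describes; the toy fitness function stands in for the accuracy of an IDS classifier such as J48, and the "relevant" feature indices and swarm parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_features = 10
relevant = {0, 2, 5}  # toy ground truth standing in for informative traffic features

def fitness(mask):
    # Stand-in for classifier accuracy: reward relevant features,
    # penalize extras (smaller subsets also speed up detection).
    sel = set(np.flatnonzero(mask))
    return len(sel & relevant) - 0.2 * len(sel - relevant)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_particles, n_iter = 20, 50
pos = rng.integers(0, 2, (n_particles, n_features)).astype(float)  # binary masks
vel = np.zeros((n_particles, n_features))
pbest = pos.copy()
pbest_f = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    # Binary PSO: velocity maps to a bit-flip probability via the sigmoid.
    pos = (rng.random(pos.shape) < sigmoid(vel)).astype(float)
    f = np.array([fitness(p) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()

print(sorted(np.flatnonzero(gbest)))
```

In a real IDS pipeline, `fitness` would train and score the classifier on the feature subset encoded by the mask.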
    • Open Access Article

      42 - Generalizing The Concept of Business Processes Structural Soundness from Classic Petri-nets to BPMN2.0 Process Models
      Yahya Poursoltani Mohammad Hassan Shirali-Shahreza S. Alireza Hashemi Golpayegani
      The BPMN 2.0 standard is a modeling language that can be understood and used by a wide range of users. However, because of its non-formal nature, models designed with it can contain structural errors such as deadlock (the impossibility of executing some process tasks) and livelock (the infinite repetition of tasks). These errors can create anomalies in the workflow of the organization. So far, some research has been conducted on the validation of these process models, and various solutions have been provided to discover some of these structural errors. The question that may be raised about these methods is whether any of them can definitely guarantee the structural correctness of a BPMN process model. To answer this question, we need a comprehensive definition of a correct BPMN 2.0 process model, against which we can evaluate the comprehensiveness of validation methods and be confident that a given method can discover all structural errors of a process model. In this paper, based on the concept of soundness of classic Petri-net process models and the generalization of its underlying properties, liveness and boundedness, to BPMN 2.0 process models, a comprehensive definition of a correct (sound) BPMN 2.0 process model is provided. Then, the comprehensiveness of the methods suggested in some of the most important prior research is evaluated against this definition, which can serve as a measure of the efficiency of BPMN validation methods.
    • Open Access Article

      43 - A Novel Multi-Step Ahead Demand Forecasting Model Based on Deep Learning Techniques and Time Series Augmentation
      Hossein Abbasimehr Reza Paki
      In a business environment with fierce competition between companies, accurate demand forecasting is vital. If we collect customer demand data at discrete points in time, we obtain a demand time series; the demand forecasting problem can therefore be formulated as a time series forecasting task. In this context, deep learning methods have demonstrated good accuracy in predicting complex time series, but their excellent performance depends on the amount of available data. For this reason, in this study we propose to use time series augmentation techniques to improve the performance of deep learning methods. Three methods are used to test the effectiveness of the proposed approach: 1) long short-term memory, 2) convolutional networks, and 3) the multi-head self-attention mechanism. This study also uses a multi-step forecasting approach that makes it possible to predict several future points in one forecasting operation. The proposed method is applied to the actual demand data of a furniture company. The experimental results show that the proposed approach improves the forecasting accuracy of the methods used in most prediction scenarios.
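The windowing, augmentation, and multi-step (direct) forecasting pipeline can be sketched as below; a least-squares linear model stands in for the paper's LSTM/CNN/self-attention networks, and the demand series is synthetic rather than the furniture company's data:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_windows(series, n_in, n_out):
    """Slide a window: n_in past points as input, n_out future points as target."""
    X, Y = [], []
    for i in range(len(series) - n_in - n_out + 1):
        X.append(series[i:i + n_in])
        Y.append(series[i + n_in:i + n_in + n_out])
    return np.array(X), np.array(Y)

def jitter(X, Y, n_copies=3, sigma=0.05):
    """Time series augmentation: add Gaussian noise to inputs, keep targets."""
    Xa = [X] + [X + rng.normal(0, sigma, X.shape) for _ in range(n_copies)]
    Ya = [Y] * (n_copies + 1)
    return np.vstack(Xa), np.vstack(Ya)

# Toy demand series with trend, seasonality, and noise.
t = np.arange(120)
demand = 10 + 0.05 * t + np.sin(t / 6) + rng.normal(0, 0.1, t.size)

X, Y = make_windows(demand, n_in=12, n_out=3)  # direct 3-step-ahead targets
Xa, Ya = jitter(X, Y)                          # 4x more training samples

# Least-squares linear model as a lightweight stand-in for a deep model.
W, *_ = np.linalg.lstsq(np.c_[Xa, np.ones(len(Xa))], Ya, rcond=None)
pred = np.c_[X[-1:], [1.0]] @ W
print(pred.shape)  # (1, 3): three future points in one forecasting operation
```

The multi-step structure lives entirely in the target matrix `Y`: the model maps one input window to all `n_out` future points at once.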
    • Open Access Article

      44 - Improving polarity identification in sentiment analysis using sarcasm detection and machine learning algorithms in Persian tweets
      Shaghayegh hajiabdollah Mitra Mirzarezaee Mir Mohsen Pedram
      Sentiment analysis is a branch of computer science and natural language processing that seeks to familiarize machines with human emotions and make those emotions recognizable. Both sentiment analysis and sarcasm detection, a sub-field of the former, seek to correctly identify the hidden positive and negative emotions of a text. The use of sarcasm on social media, where criticism can be exercised within the context of humor, is quite common. Detecting sarcasm has a special effect on correctly recognizing the polarity of an opinion; thus it not only helps the machine understand the text better, but also makes it possible for the author to get his or her message across more clearly. For this purpose, 8000 Persian tweets with emotion labels, examined for the presence or absence of sarcasm, have been used. The innovation of this research is in extracting keywords from sarcastic sentences. A separate classifier has been trained to identify irony in the text, and its output is provided as an additional feature to the sentiment classifier. In addition to keywords extracted from the text, emoticons and hashtags have also been used as features. Naive Bayes, support vector machines, and neural networks were used as baseline classifiers, and finally a combination of classifiers was used to identify the sentiment of the text. The results of this study show that identifying irony in the text and using it to identify emotions increases the accuracy of the results.
    • Open Access Article

      45 - Modeling and evaluation of RPL routing protocol by colored Petri nets
      Mohammad Pishdar Younes Seifi
      The Internet of Things (IoT) is a novel and widely used idea aimed at connecting objects through communication technologies. Adapting prior technologies has always been one of the most challenging issues in this area over the years. The Routing Protocol for Low-Power and Lossy Networks (RPL), standardized in 2012, is a solution for IoT routing. This protocol has been utilized by many researchers and hardware companies in the field. The present study evaluates RPL behavior from the perspective of the existence of stopping conditions, the traversal of a particular route multiple times (loop conditions), and its reaction to different inputs, while presenting a modular and readable model of this protocol.
    • Open Access Article

      46 - The Effect of ICT Development on Economic and Political Risk: A Cross Country Study
      َAmir Hossein Mozayani Sajjad  Faraji Dizaji Hossein Karimi
      Today, information and communication technology has influenced all areas of human life, including economics and politics. Research on the effects of information and communication technology has so far not considered its effects on economic and political risk. Therefore, this study examines the effect of ICT development on economic and political risk for three selected groups of countries (developed, developing, and OPEC members) in the period 2007-2019. The panel data method was used to estimate the model. Based on the estimation results for all sample countries, the deployment of information and communication technology reduces economic and political risk, but the results differ across groups: ICT development increases economic risk in the selected OPEC members and developing countries, while it reduces economic risk in developed countries. Also, in OPEC member countries no significant relationship was found between ICT and political risk, while in the selected developed and developing countries ICT increases political risk.
    • Open Access Article

      47 - Increasing Total Throughput, Reducing Outage to Zero, and Reducing Power Consumption in a Cellular Network
      Mohsen Seyyedi Saravi Mohammadreza Binesh Marvasti Seyedeh Leili Mirtaheri Seyyed Amir Asghari
      Assuring the quality of remote services in cellular networks requires attention to significant criteria such as throughput, power consumption, and interference. Accordingly, this paper presents a framework for optimizing these criteria under the assumption of limited transmission capacity for mobile nodes in a wireless cellular network, since in the real world transmission capacity is often limited by hardware, battery, and regulatory constraints. In developing this framework, existing methods were examined and their advantages and disadvantages compared, and a new idea was proposed. After proving the underlying formula, the idea was simulated in MATLAB. Existing methods either increased throughput by assuming unlimited transmission power or prevented some nodes from accessing the communication service. The simulation results showed that, in addition to increasing throughput by 27%, the proposed algorithm reduced the power consumption of mobile nodes in the network by a quarter, and operated in such a way that no node lost communication service.
    • Open Access Article

      48 - Synthesizing an image dataset for text detection and recognition in images
      Fatemeh Alimoradi Farzaneh Rahmani Leila Rabiei Mohammad Khansari Mojtaba Mazoochi
      Text detection in images is one of the most important sources for image recognition. Although much research has been conducted on text detection and recognition, and on end-to-end models (models that provide detection and recognition in a single model) based on deep learning for languages such as English and Chinese, the main obstacle to developing such models for the Persian language is the lack of a large training data set. In this paper, we design and build the tools required for synthesizing a data set of scene text images with parameters such as color, size, font, and text rotation for Persian. These tools are used to generate a large yet varied data set for training deep learning models. Owing to the design of the synthesizing tools and the resulting variety of texts, models do not depend on the synthesis parameters and can generalize. As a sample data set, 7603 scene text images and 39660 cropped word images were synthesized. The advantage of our method over collecting real images is that any arbitrary number of images can be synthesized without the need for manual annotation. As far as we know, this is the first open-source, large data set of scene text images for the Persian language.
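The parameter-sampling side of such a synthesis tool can be sketched as below; the font file names, word list, and value ranges are assumptions for illustration (the paper does not list its exact settings), and the actual rendering step would use an imaging library such as Pillow:

```python
import random

random.seed(42)

# Hypothetical parameter spaces; the paper's actual fonts/words are not given.
FONTS = ["IRNazanin.ttf", "BNazanin.ttf", "Vazir.ttf"]  # assumed Persian fonts
WORDS = ["کتاب", "تهران", "دانشگاه"]                    # sample Persian words

def sample_params():
    """Sample one synthetic scene-text configuration: the four synthesis
    parameters the abstract names (color, size, font, rotation) plus the word."""
    return {
        "text": random.choice(WORDS),
        "font": random.choice(FONTS),
        "size_px": random.randint(16, 72),
        "color_rgb": tuple(random.randint(0, 255) for _ in range(3)),
        "rotation_deg": random.uniform(-15, 15),
    }

params = [sample_params() for _ in range(5)]
# Each dict would then be rendered onto a background image (e.g. with Pillow's
# ImageDraw.text followed by Image.rotate), and the word's bounding box saved
# as the annotation, so no manual labeling is needed.
print(len(params), sorted(params[0]))
```

Because the ground-truth box is known at render time, every generated image comes pre-annotated, which is the key advantage over photographing real scenes.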
    • Open Access Article

      49 - An Intelligent Pricing System for Cloud Services Aimed at Increasing Implementation Simplicity and Flexibility
      Mahboubeh Zandieh Sepideh Adabi Samaneh Yazdani
      Most previous pricing models for cloud resources that are defined based on auctions suffer from high implementation complexity in real cloud environments. Therefore, the main challenge for researchers is to design dynamic pricing models that achieve three goals: 1) low computational complexity, 2) high accuracy, and 3) high implementation simplicity in real cloud environments. CMM (Cloud Market Maker) is one of the most popular dynamic pricing models, with the two advantages of computational accuracy and the possibility of implementation in real cloud environments. This model calculates the bid price with a linear function whose parameters are the buyer's urgency, the number of competitors, and the number of opponents. Despite the advantages of this pricing function, the relative importance of its constituent parameters is treated as the same under all market conditions. Ignoring this issue reduces both system flexibility and computational accuracy under tangible changes in the cloud market. Therefore, the authors of this paper focus on designing a new cloud market-aware intelligent pricing system, deployed on the customer side of the market, to tackle this problem while guaranteeing high implementation simplicity. For this purpose, an agent-based intelligent pricing system combining support vector machine (SVM) and analytic hierarchy process (AHP) techniques is proposed. Simulation results show the better performance of the proposed solution, named DPMA, in comparison to CMM.
    • Open Access Article

      50 - E-Government Service Supply Chain: Identifying Performance Evaluation Indicators (Case Study of e-Customs System in Iran)
      jalal zare Rosa Hendijani
      Today, many governments around the world are increasingly leveraging advances in information and communication technologies to provide electronic services to their citizens, but estimates indicate that many e-government projects fail, in part or in whole. Incomplete formation and poor performance of the supply chain are cited by researchers as the most important reasons for the failure of these projects. Since few studies have been conducted on the e-government service supply chain and its performance evaluation indicators, this study examines the e-government service supply chain in Iran through a case study of the e-customs system. Using SMART criteria and the ELECTRE I technique, it identifies performance evaluation indicators for this chain. This study shows that, just as the principles of the e-supply chain in the manufacturing sector have been proposed by researchers, the concept can be generalized to the public service sector. The results also show that, unlike in previous studies on performance evaluation indicators, e-government service supply chain indicators differ significantly from those of traditional service supply chains.
    • Open Access Article

      51 - Three Dimensional Beamforming in Multi User Multi Antenna Cellular Networks with Randomly Distributed Users
      S. Mohammad Razavizadeh Nasim Mohammadi
      In this paper, the problem of using three-dimensional beamforming (3DBF) in a multiple-input multiple-output (MIMO) cellular network is discussed. The network consists of a cell with multiple users, in which users are distributed over the cell area according to a Poisson point process (PPP), which is close to the conditions of a real mobile network. In this case, the number of users inside the cell and their locations are random. Depending on the distribution of users in space and the differences in their distances from the base station, their elevation angles will also differ. Considering downlink transmission and a zero-forcing (ZF) precoder at the base station, with the aim of eliminating intra-cell interference, we evaluate and analyze the coverage probability in the cell and then obtain the antenna tilt angle that maximizes it. Analysis of the numerical results confirms the accuracy of the calculations and the value of the optimal tilt angle of the antenna array.
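The interference-nulling role of the zero-forcing precoder mentioned above can be sketched numerically; the antenna/user dimensions and Rayleigh channel below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy downlink: base station with 8 antennas serving 4 single-antenna users.
n_tx, n_users = 8, 4
H = (rng.normal(size=(n_users, n_tx))
     + 1j * rng.normal(size=(n_users, n_tx))) / np.sqrt(2)  # Rayleigh fading

# Zero-forcing precoder: right pseudo-inverse of H, columns normalized to
# unit power, so each user's stream arrives free of intra-cell interference.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W /= np.linalg.norm(W, axis=0)

# The effective channel H @ W is diagonal: off-diagonal (interference) terms
# vanish up to numerical precision.
eff = H @ W
off_diag = eff - np.diag(np.diag(eff))
print(np.max(np.abs(off_diag)) < 1e-9)  # True: interference nulled
```

ZF trades some received signal power (the column normalization) for exact suppression of intra-cell interference, which is what makes the coverage analysis tractable.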
    • Open Access Article

      52 - Stock market prediction using optimized grasshopper optimization algorithm and time series algorithms
      Vahid Safari dehnavi masoud shafiee
      Stock market prediction is an attractive and challenging field for researchers in financial markets. Many models used in stock market prediction cannot predict accurately, or they require a large amount of input data, which increases the size of the networks and the learning complexity, ultimately reducing forecasting accuracy. This article proposes a method that can effectively predict the stock market: past market prices are used to reduce the volume of input data, and this data is fed into a regression model.
    • Open Access Article

      53 - Power-Efficient Allocation in C-RAN with a Multi-Access Technology Selection Approach
      ALI ASGHAR ANSARI Mohsen Eslami Mohammad Javad Dehghani Saeideh Parsaei Fard
      In this paper, we consider uplink energy-efficient resource allocation in a multicell virtual wireless network with a C-RAN architecture, where an MNO interacts with a number of MVNOs under a predetermined business model. In each cell of this system, two types of multiple access technologies, namely OFDMA and massive MIMO, are available to the MVNOs at two different prices. In this setup, we propose a multi-access technology selection approach (MATSA) with the objective of reducing operating costs and maximizing the profit of the MVNOs subject to a set of constraints, and formulate this resource allocation problem with a new utility function. Due to the existence of continuous and binary variables in the formulated optimization problem, and the inter-cell interference in the data rate functions, this optimization problem is non-convex with very high computational complexity. To tackle this, by applying complementary geometric programming (CGP) and successive convex approximation (SCA), an effective two-step iterative algorithm is developed that converts the optimization problem into two subproblems, finding the optimal technology selection and power consumption for each user in two steps, respectively. The simulation results demonstrate that the proposed approach (MATSA) with the novel utility function is more efficient than the traditional approach in terms of increasing total energy efficiency and reducing total power consumption; the profit of the MVNOs is enhanced by more than 13% compared to the traditional approach.
    • Open Access Article

      54 - Design and fabrication of the E-field probe for the measurement of the electromagnetic fields in 5G frequency band
      Reza Bahri Mahdi Fasanghari Ahmadreza Eskandari Vahid Yazdanian
      In this paper, a device for measuring electric field intensity in the environment in the 5G frequency band, covering the frequency range 3400-3600 MHz, is designed and presented. This device, called a 5G electric field probe, is realized with three orthogonal antennas connected to filter circuits and power detectors. The proposed antenna is a strip monopole antenna, and the three orthogonal antennas can receive electric fields from all directions uniformly and isotropically. The proposed filter is a coupled-line microstrip filter that removes out-of-band signals. The proposed power detector operates linearly over a wide dynamic range and converts the fields received from the antenna and filter sections into DC voltages suitable for digital processing. Finally, the designed 5G electric field probe is fabricated and tested. The measurements confirm the proper operation of the probe in terms of dynamic range, accuracy, sensitivity, linearity, and the isotropy of the received electric fields.
    • Open Access Article

      55 - A review of the application of meta-heuristic algorithms in load balancing in cloud computing
      Mehdi Morsali Abolfazl Toroghi Haghighat Sasan Hosseinali-Zade
      With the widespread use of cloud computing, the need to improve performance and reduce latency in the cloud increases. One of the problems of distributed environments, especially clouds, is unbalanced load, which reduces speed and efficiency and increases delay in data storage and retrieval. Various methods for load balancing in the cloud environment have been proposed, each of which addresses the issue from its own perspective and has its advantages and disadvantages. In this research, we first provide criteria for measuring load balance in the cloud and then examine the use of metaheuristic methods for load balancing in the cloud environment. After introducing metaheuristic load balancing methods, we compare them based on the aforementioned criteria and discuss the advantages and disadvantages of each. The algorithms examined in this research include ant colony, artificial ant colony, bee colony, artificial bee colony, bee foraging, particle swarm, cat swarm, fish swarm, simulated annealing, genetic, tabu search, and hybrid algorithms.
    • Open Access Article

      56 - An Approach to Prioritizing Quality Dimensions of Cloud Computing Services Using a Multiple Criteria Decision-Making Method
      Zahra Abbasi Somayeh Fatahi Mohammad Javad  Ershadi
      Today, quality is one of the most important factors in attracting customer satisfaction and loyalty to service organizations; therefore, one of the main concerns of managers is improving service quality. With the development of the Internet and communications, the concept of cloud computing has expanded, providing a new model for the supply, consumption, and delivery of computing services. The purpose of this study is to make the optimal decision in choosing the appropriate cloud service according to users' conditions so that they achieve the highest satisfaction. The fuzzy Delphi method, the fuzzy analytic hierarchy process (AHP), the fuzzy TOPSIS method, and finally a multi-criteria decision-making method are used in this research. The results of the fuzzy Delphi method show that the indicators of transparency, accessibility, and reliability should be eliminated. The fuzzy AHP results identified cost as the most important index and on-demand support as the least important. According to the fuzzy TOPSIS results, based on the weights obtained from fuzzy AHP, the SaaS, IaaS, and PaaS cloud services were ranked first to third, respectively. Using SaaS provides numerous benefits to employees and companies, such as reducing the time and money spent on time-consuming tasks such as installing, managing, and upgrading software.
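The TOPSIS ranking step can be sketched in its crisp (non-fuzzy) form; the decision matrix, criteria, and weights below are illustrative assumptions, not the study's data:

```python
import numpy as np

# Rows = SaaS, IaaS, PaaS; columns = assumed criteria (cost, security, support).
# Scores and AHP-style weights are made up for illustration only.
alts = ["SaaS", "IaaS", "PaaS"]
M = np.array([[8.0, 7.0, 6.0],
              [6.0, 8.0, 5.0],
              [5.0, 6.0, 7.0]])
w = np.array([0.5, 0.3, 0.2])           # cost weighted highest, as in the study
benefit = np.array([True, True, True])  # all treated as benefit criteria here

V = w * M / np.linalg.norm(M, axis=0)          # weighted normalized matrix
ideal = np.where(benefit, V.max(0), V.min(0))  # positive ideal solution
nadir = np.where(benefit, V.min(0), V.max(0))  # negative ideal solution
d_pos = np.linalg.norm(V - ideal, axis=1)      # distance to ideal
d_neg = np.linalg.norm(V - nadir, axis=1)      # distance to anti-ideal
closeness = d_neg / (d_pos + d_neg)            # TOPSIS closeness coefficient
ranking = [alts[i] for i in np.argsort(-closeness)]
print(ranking)
```

With these toy numbers the closeness ordering is SaaS, IaaS, PaaS, which happens to match the ranking reported in the abstract; the fuzzy variant replaces the crisp scores with triangular fuzzy numbers and fuzzy distances but follows the same steps.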
    • Open Access Article

      57 - Data-driven Marketing in Digital Businesses from Dynamic Capabilities View
      Maede  Amini vlashani ayoub mohamadian Seyed Mohammadbagher Jafari
      Despite the enormous volume of data and the benefits it can bring to marketing activities, the literature is unclear on how to use it, and very few studies have been conducted in this field. In this regard, this study uses the dynamic capabilities view to identify the dynamic capabilities of data-driven marketing, in order to focus on data in the development of marketing strategies, make effective decisions, and improve efficiency in marketing processes and operations. This research was carried out with a qualitative method utilizing a content analysis strategy and interviews with specialists. The subjects were 18 professionals in the field of data analytics and marketing, selected by purposeful sampling. This study identifies data-driven marketing dynamic capabilities including: the ability to absorb marketing data; to aggregate and analyze marketing data; to make data-driven decisions; to improve the data-driven customer experience; and data-driven innovation, networking, agility, and transformation. The results of this study can be a step towards developing the theory of dynamic capabilities in the field of marketing with a data-driven approach. Therefore, it can be used in training and in creating new organizational capabilities to use big data in the marketing activities of organizations, to develop and improve data-driven products and services, and to improve the customer experience.
    • Open Access Article

      58 - A comprehensive survey on the influence maximization problem in social networks
      mohsen taherinia mahdi Esmaeili Behrooz Minaei
      With the incredible development of social networks, many marketers have exploited the opportunities and attempt to find influential people within online social networks in order to influence others. This problem is known as the influence maximization problem. Efficiency and effectiveness are two important criteria in the design and analysis of influence maximization algorithms. Some researchers have improved both by exploiting community structure, a very useful feature of social networks. This paper provides a comprehensive review of the state-of-the-art algorithms for the influence maximization problem, with special emphasis on community detection-based approaches.
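The baseline that community detection-based approaches try to speed up is the classic greedy algorithm with Monte Carlo estimation of spread under the independent cascade model; the toy graph and activation probability below are illustrative:

```python
import random

random.seed(7)

# Toy directed graph as an adjacency list; every edge activates with probability p.
graph = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
p = 0.3

def simulate_spread(seeds, trials=500):
    """Monte Carlo estimate of expected spread under independent cascade."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph[u]:
                    if v not in active and random.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

def greedy_im(k):
    """Classic greedy: repeatedly add the node with the best marginal gain."""
    seeds = []
    for _ in range(k):
        best = max((v for v in graph if v not in seeds),
                   key=lambda v: simulate_spread(seeds + [v]))
        seeds.append(best)
    return seeds

seeds = greedy_im(2)
print(seeds)
```

The repeated Monte Carlo simulation is what makes plain greedy expensive; community-based methods restrict candidate seeds to communities and so cut this cost.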
    • Open Access Article

      59 - Presenting the ICT Policies Implementation Model of the 6th Development Using the Neural Network Method
      Nazila Mohammadi Gholamreza   Memarzadeh Tehran Sedigheh Tootian Isfahani
      Properly managing the implementation of information and communication technology policies in a planned way is inevitable for improving the country's position in the fields of science and technology. The purpose of this research is to provide a model of the factors affecting the implementation of Iran's ICT policies with the help of the neural network technique, based on Giddens' structuration theory. In terms of method, this research is a survey; in terms of purpose, it is applied, since it tries to use the research results in the Ministry of Communications and Information Technology and the Iran Telecommunication Company. Data collection is based on library and field methods, using a researcher-made questionnaire. The statistical population of the research consists of information and communication technology experts at the headquarters of the Iran Telecommunication Company (810 people), of whom 260 were randomly selected as a sample based on Cochran's formula. MATLAB software was used for data analysis. According to the findings, the best combination for development is when all input variables are considered at the same time, and the worst case is when the infrastructure development variable is ignored; based on network sensitivity analysis, the most important variable is infrastructure development and the least important is content supply.
    • Open Access Article

      60 - Community Detection in Bipartite Networks Using HellRank Centrality Measure
      Ali Khosrozadeh Ali Movaghar Mohammad Mehdi Gilanian Sadeghi Hamidreza Mahyar
      Community structure is a common and important feature of many complex networks, including bipartite networks. In recent years, community detection has received attention in many fields and many methods have been proposed for this purpose, but the heavy time consumption of some methods limits their use in large-scale networks. There are methods with lower time complexity, but they are mostly non-deterministic, which greatly reduces their applicability in the real world. The usual approach to community detection in bipartite networks is to first construct a unipartite projection of the network and then detect communities in that projection using methods designed for unipartite networks, but such projections inherently lose information. In this paper, based on the bipartite modularity measure that quantifies the strength of partitions in bipartite networks, and using the HellRank centrality measure, a fast and deterministic method is proposed for detecting communities directly from bipartite networks, without the need for projection. The proposed method is inspired by, and simulates, the voting process in elections. Manuscript profile
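The bipartite modularity measure referred to above (Barber's formulation) can be computed directly on the biadjacency matrix, without any projection. A minimal sketch, not the authors' HellRank-based algorithm; the toy matrix and labels are illustrative:

```python
import numpy as np

def barber_modularity(A, row_comm, col_comm):
    """Barber's bipartite modularity for a biadjacency matrix A.
    row_comm[i] / col_comm[j] give the community labels of row/column nodes."""
    m = A.sum()                 # total number of edges
    k = A.sum(axis=1)           # row-node degrees
    d = A.sum(axis=0)           # column-node degrees
    Q = 0.0
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            if row_comm[i] == col_comm[j]:
                Q += A[i, j] - k[i] * d[j] / m
    return Q / m

# Two clean blocks: rows {0,1} with cols {0,1}, rows {2,3} with cols {2,3}
A = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]], dtype=float)
Q = barber_modularity(A, [0, 0, 1, 1], [0, 0, 1, 1])
```

A partition that matches the block structure scores higher than one that cuts across it, which is exactly the signal a community detection method maximizes.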
    • Open Access Article

      61 - Test case Selection based on Test-Driven Development
      Zohreh Mafi mirian mirian
      Test-Driven Development (TDD) is a test-first software development method in which the production of each component of the code begins with writing a test case. The method has attracted attention due to its many advantages, including readable, well-organized, and concise code, increased quality, productivity, and reliability, and the possibility of regression testing thanks to the comprehensive set of unit tests it creates. The large number of unit test cases produced in this method is a strength in terms of code reliability; however, repeatedly executing all of them lengthens regression testing. The purpose of this article is to present a test case selection algorithm that reduces regression test time in TDD. Various ideas have been proposed for selecting test cases and reducing regression test time, most of them tied to a particular programming language or development method. The idea presented in this article builds on the program differencing approach and on the nature of TDD: meaningful semantic and structural links are established between unit tests and code blocks, and test cases are selected based on these relationships. Manuscript profile
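The core selection step described above, mapping unit tests to the code blocks they touch and re-running only the tests affected by a change, can be sketched in a few lines. The coverage map and block names are hypothetical, not from the paper:

```python
def select_tests(test_coverage, changed_blocks):
    """Return only the tests whose covered code blocks intersect the change set.

    test_coverage: dict mapping test name -> set of code-block ids it touches
    changed_blocks: set of code-block ids modified since the last run
    """
    return sorted(t for t, blocks in test_coverage.items()
                  if blocks & changed_blocks)

coverage = {
    "test_login":    {"auth.verify", "auth.session"},
    "test_checkout": {"cart.total", "pay.charge"},
    "test_profile":  {"auth.session", "user.render"},
}
# Only auth.session changed, so the cart test can be skipped this run.
selected = select_tests(coverage, {"auth.session"})
```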
    • Open Access Article

      62 - Application identification through intelligent traffic classification
      Shaghayegh Naderi
      Traffic classification and analysis is one of the major challenges in data mining and machine learning, playing an important role in providing security, quality assurance, and network management. Today, a large share of network traffic is encrypted by secure communication protocols such as HTTPS. Encrypted traffic reduces the possibility of monitoring and detecting suspicious and malicious traffic in communication infrastructures (in exchange for increased user security and privacy), and classifying it without decrypting network communications is difficult, because the payload information is lost and only the header information (itself encrypted in newer protocol versions such as TLS 1.3) is accessible. Therefore, older traffic analysis approaches, such as the various port-based and payload-based methods, have lost their effectiveness, and new approaches based on artificial intelligence and machine learning are used for encrypted traffic analysis. In this article, after reviewing traffic analysis methods, an operational architectural framework for intelligent traffic analysis and classification is designed. Then, an intelligent model for traffic classification and application identification is presented and evaluated using machine learning methods on the Kaggle141 dataset. The results show that the random forest model, in addition to being more interpretable than deep learning methods, achieves higher accuracy in traffic classification than other machine learning methods. Finally, tips and suggestions on using machine learning methods in operational traffic classification are provided. Manuscript profile
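As a minimal illustration of ML-based flow classification from statistical features (not the article's random forest pipeline or its dataset), a nearest-neighbour sketch over two hypothetical flow features separates bulky video-like flows from small messaging-like flows:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test flow by majority vote among its k nearest training flows."""
    preds = []
    for x in X_test:
        dist = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(dist)[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

# Hypothetical flow features: [mean packet size (bytes), mean inter-arrival time (ms)]
X_train = np.array([[1400, 2], [1350, 3], [1420, 2],    # label 0: video streaming
                    [90, 40], [110, 55], [100, 48]])    # label 1: messaging
y_train = np.array([0, 0, 0, 1, 1, 1])
X_test = np.array([[1380, 2], [105, 50]])
pred = knn_predict(X_train, y_train, X_test)
```

The point is that such features survive encryption: the classifier never inspects payloads, only per-flow statistics.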
    • Open Access Article

      63 - Improving Opinion Aspect Extraction Using Domain Knowledge and Term Graph
      Mohammadreza Shams Ahmad  Baraani Mahdi Hashemi
      With the advancement of technology, analyzing and assessing user opinions, as well as determining the user's attitude toward various aspects, have become a challenging and crucial issue. Opinion mining is the process of recognizing people's attitudes from textual comments at three different levels: document level, sentence level, and aspect level. Aspect-based opinion mining analyzes people's viewpoints on various aspects of a subject; its most important subtask is aspect extraction, which is addressed in this paper. Most previous methods require labeled data or extensive language resources to extract aspects from the corpus, which can be time-consuming and costly to prepare. In this paper, we propose an unsupervised approach for aspect extraction that uses topic modeling and the Word2vec technique to integrate semantic information and domain knowledge based on a term graph. The evaluation results show that the proposed method not only outperforms previous methods in terms of aspect extraction accuracy, but also automates all steps and thus eliminates the need for user intervention. Furthermore, because it does not rely on language resources, it can be used in a wide range of languages. Manuscript profile
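The term-graph idea can be sketched simply: build a sentence-level co-occurrence graph over candidate terms and rank terms by weighted degree. This toy sketch omits the paper's topic modeling and Word2vec components, and the review data is invented:

```python
from collections import defaultdict
from itertools import combinations

def build_term_graph(sentences):
    """Weighted co-occurrence graph: edge weight = number of sentences
    in which both terms appear together."""
    graph = defaultdict(int)
    for terms in sentences:
        for a, b in combinations(sorted(set(terms)), 2):
            graph[(a, b)] += 1
    return graph

def rank_aspects(graph):
    """Rank candidate aspect terms by weighted degree in the term graph."""
    degree = defaultdict(int)
    for (a, b), w in graph.items():
        degree[a] += w
        degree[b] += w
    return sorted(degree, key=degree.get, reverse=True)

reviews = [["battery", "life", "great"],
           ["battery", "screen"],
           ["screen", "battery", "poor"],
           ["price", "fair"]]
ranking = rank_aspects(build_term_graph(reviews))
```

Frequently co-mentioned terms such as "battery" rise to the top, which is the intuition behind graph-based aspect candidates before semantic filtering.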
    • Open Access Article

      64 - Emerging technologies in future generations of high performance computing: introduction, taxonomy and future research directions
      Mahmood Nematollahzadeh Ehsan Arianyan Masoud Hayeri Khyavi Niloofar Gholipoor Abdollah Sepahvand
      Due to the rapid growth of science and technology, the need for high performance computing is increasing every day. So far, the majority of the world's high performance computing needs have been met by conventional silicon-based technologies, but the end of the silicon era is near, and this fact has led scientists to pursue emerging technologies such as quantum computing, bio computing, optical computing, and similar technologies. Although some of these technologies are not new, and the initial introduction of some dates back decades, they have been neglected to date because of the attractiveness of classical silicon-based computing and its rapid pace of development. Recently, however, these technologies have begun to be used to build scalable high performance computers. In this paper, we introduce these technologies, how they contribute to the field of high performance computing, their current and future status, and their challenges. A taxonomy of each technology from the computational point of view, together with its research topics, is also presented, which can be utilized for future research in this field. Manuscript profile
    • Open Access Article

      65 - Valuation of digital services in Iran: Empirical proof for Google and Instagram
      Farhad Asghari Estiar Amir Mohammadzadeh Ebrahim Abbasi
      This article surveys the fundamental value of digital platforms such as Instagram and Google. Although digital technologies pervade everyday life, valuing digital services is challenging because their usage is free of charge. Applying the methodology of discrete choice experiments, we estimated the value of free digital goods. For the first time in the literature, we obtained data on willingness-to-pay and willingness-to-accept, together with socio-economic variables. The customers' average valuation of these free digital services is 4.9 million Rials per week for Google and 3.27 million Rials for Instagram. This paper corroborates that Instagram and Google have intrinsic value to users, despite the fact that the platforms' services are free of charge. This work is a starting point for valuing free services in Iran such as Shad, Rubika, and Zarebeen, which have played a significant role in the communication industry since the beginning of the Covid-19 pandemic; in the discussion of the national information network, the market value of the provider companies will be very important. Manuscript profile
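In a discrete choice experiment, willingness-to-pay falls out of the ratio of estimated utility coefficients. A minimal binary-logit sketch with hypothetical coefficients, chosen here purely for illustration so that the WTP equals the 4.9-million-Rial weekly figure reported for Google:

```python
import math

def choice_probability(b_service, b_price, service, price):
    """Binary logit probability of choosing the bundle over an outside option
    with utility 0, where U = b_service*service - b_price*price."""
    u = b_service * service - b_price * price
    return 1.0 / (1.0 + math.exp(-u))

def willingness_to_pay(b_service, b_price):
    """Price at which keeping the service and giving it up yield equal utility."""
    return b_service / b_price

# Hypothetical coefficients, e.g. as estimated from survey choices about Google access
b_service, b_price = 2.45, 0.5
wtp = willingness_to_pay(b_service, b_price)   # in, say, million Rials per week
```

At the WTP price the respondent is exactly indifferent, so the choice probability is 0.5; survey designs exploit this by varying the offered price around that point.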
    • Open Access Article

      66 - A novel metaheuristic algorithm and its discrete form for influence maximization in complex networks
      Vahideh Sahargahi Vahid Majidnezhad Saeed  Taghavi Afshord Bagher Jafari
      In light of the No Free Lunch (NFL) theorem, which establishes the inherent limitations of meta-heuristic algorithms in universally efficient problem solving, the ongoing quest for greater diversity and efficiency prompts the introduction of novel algorithms each year. This research presents IWOGSA, a meta-heuristic algorithm tailored for continuous optimization problems. IWOGSA combines principles from the invasive weed optimization algorithm and the gravitational search algorithm, capitalizing on their synergies. Its key innovation is a dual sample generation strategy: a subset of samples follows a normal distribution, while others follow the planetary-motion-inspired velocities and accelerations of the gravitational search algorithm; in addition, a selective transfer of samples from distinct classes contributes to the evolution of succeeding generations. Building on this foundation, a discrete variant of IWOGSA, termed DIWOGSA, is introduced to tackle discrete optimization problems. The efficacy of DIWOGSA is demonstrated on the intricate influence maximization problem, where it distinguishes itself with an astute population initialization strategy and a local search operator that expedites convergence. Empirical validation encompasses a rigorous assessment of IWOGSA on established benchmark functions, composite functions, and real-world engineering structural design problems. IWOGSA outperforms both contemporary and traditional methods, a superiority statistically affirmed by the Friedman rank test. Also, the DIWOGSA algorithm is evaluated on different networks for the influence maximization problem, and it shows acceptable results in terms of influence and computational time in comparison to conventional algorithms. Manuscript profile
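For context, the influence maximization problem that DIWOGSA targets is usually defined as maximizing expected spread under the independent cascade model. The classic greedy baseline that such metaheuristics are compared against can be sketched as follows; the graph, activation probability, and Monte-Carlo settings are illustrative:

```python
import random

def simulate_ic(adj, seeds, p=0.3, rng=None):
    """One independent-cascade run: each newly active node tries once to
    activate each inactive neighbour with probability p. Returns spread size."""
    rng = rng or random
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_im(adj, k, runs=200, p=0.3, seed=0):
    """Greedy seed selection: repeatedly add the node with the best marginal
    average spread, estimated by Monte-Carlo simulation."""
    rng = random.Random(seed)
    nodes, seeds = list(adj), []
    for _ in range(k):
        best, best_spread = None, -1.0
        for v in nodes:
            if v in seeds:
                continue
            spread = sum(simulate_ic(adj, seeds + [v], p, rng)
                         for _ in range(runs)) / runs
            if spread > best_spread:
                best, best_spread = v, spread
        seeds.append(best)
    return seeds

adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0, 4], 4: [3, 5], 5: [4]}
chosen = greedy_im(adj, k=2)
```

Greedy is accurate but expensive (it re-simulates spread for every candidate), which is exactly the cost that metaheuristic approaches like DIWOGSA aim to reduce.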
    • Open Access Article

      67 - Fuzzy Multicore Clustering of Big Data in the Hadoop Map Reduce Framework
      Seyed Omid Azarkasb Seyed Hossein Khasteh Mostafa  Amiri
      A logical way to account for the overlap of clusters is to assign a set of membership degrees to each data point. Fuzzy clustering, due to its reduced partitions and smaller search space, generally incurs lower computational overhead and easily handles ambiguous, noisy, and outlier data; it is therefore considered an advanced clustering method. However, fuzzy clustering methods often struggle with non-linear relationships in the data. This paper proposes a method that uses multicore learning within the Hadoop MapReduce framework to identify linearly inseparable clusters in complex big data structures. The multicore learning model is capable of capturing complex relationships among data, while Hadoop enables us to interact with a logical cluster of processing and data storage nodes instead of interacting with individual operating systems and processors. In summary, the paper presents the modeling of non-linear data relationships using multicore learning, the determination of appropriate values for fuzzy parameterization and feasibility, and an algorithm within the Hadoop MapReduce model. The experiments were conducted on one of the commonly used datasets from the UCI Machine Learning Repository, as well as on the implemented CloudSim dataset simulator, and satisfactory results were obtained. According to published studies, the UCI Machine Learning Repository is suitable for regression and clustering purposes in analyzing large-scale datasets, while CloudSim is specifically designed for simulating cloud computing scenarios, calculating time delays, and task scheduling. Manuscript profile
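The fuzzy membership computation at the heart of such clustering can be sketched with the standard fuzzy c-means update rule. This is a plain single-machine illustration, not the paper's multicore/MapReduce algorithm:

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Fuzzy c-means membership matrix: u[i, c] is proportional to
    1 / ||x_i - v_c||^(2/(m-1)), normalised so each row sums to 1."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    d = np.fmax(d, 1e-12)               # avoid division by zero at a center
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [2.5, 2.5]])
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
U = fcm_memberships(X, centers)
```

The last point sits exactly between the two centers and receives a 0.5/0.5 membership split, which is the overlap behaviour crisp clustering cannot express.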
    • Open Access Article

      68 - Information Technology, Strategy Implementation, Information Systems, Strategic Planning, Input-Process-Outcome Framework
      Mona Jami Pour Shahnaz Akbari Emami Safora Firozeh
      IT strategy is a key factor in improving companies' processes and performance in using IT. Many companies have a strategic planning process, but only a few succeed in implementing strategies efficiently. Therefore, the purpose of this study is to design a process framework for implementing IT strategy, identifying the drivers, processes, and consequences of IT strategy implementation in organizations. The present study is a qualitative research with a phenomenological approach; to collect data, open and in-depth interviews were conducted with 10 experts in the field of IT, selected through theoretical sampling. The results of the analysis show that the inputs to IT strategy implementation include environmental requirements of business continuity, structural-system cohesion, technology-oriented human resources, IT strategic leadership, skill requirements, and common values. The second aspect of the IT strategy implementation model includes IT program monitoring and communication, structural appropriateness, development of support policies, budgeting and resource allocation, appropriate training, and the development of a supportive culture. Finally, the consequences of implementing an IT strategy were categorized into those related to finance, internal processes, customers, and growth and learning. Manuscript profile
    • Open Access Article

      69 - Automatic Lung Diseases Identification using Discrete Cosine Transform-based Features in Radiography Images
      Shamim Yousefi Samad Najjar-Ghabel
      Using raw radiography images for lung disease identification does not yield acceptable performance; machine learning can help identify diseases more accurately. Extensive studies have been performed on classical and deep learning-based disease identification, but these methods either lack acceptable accuracy and efficiency or require large amounts of training data. In this paper, a new method is presented for automatic interstitial lung disease identification in radiography images that addresses these challenges. In the first step, patient information is removed from the images and the remaining pixels are standardized for more precise processing. In the second step, the reliability of the proposed method is improved by the Radon transform, extra data is removed using the top-hat filter, and the detection rate is increased by the Discrete Wavelet Transform and the Discrete Cosine Transform; the number of final features is then reduced with Locality Sensitive Discriminant Analysis. In the third step, the processed images are divided into training and test sets, and different models are created from the training data. Finally, the best model is selected using the test data. Simulation results on the NIH dataset show that the decision tree provides the most accurate model, improving the harmonic mean of sensitivity and accuracy by up to 1.09 times compared to similar approaches. Manuscript profile
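The Discrete Cosine Transform step can be illustrated directly: for a smooth image block, a 2-D DCT-II concentrates almost all energy in a few low-frequency coefficients, which is what makes it useful as a compact feature. A self-contained sketch on a synthetic block, not the paper's full pipeline:

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II basis matrix (rows are cosine basis vectors)."""
    n = np.arange(N)
    C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C *= np.sqrt(2.0 / N)
    C[0, :] = np.sqrt(1.0 / N)   # DC row has its own normalisation
    return C

def dct2(img):
    """Separable 2-D DCT-II of a square image block."""
    C = dct_matrix(img.shape[0])
    return C @ img @ C.T

# Smooth 8x8 gradient block: energy should concentrate in low frequencies
x = np.linspace(0.0, 1.0, 8)
block = x[:, None] + x[None, :]
F = dct2(block)
low_energy = np.sum(F[:2, :2] ** 2)
total_energy = np.sum(F ** 2)
```

Keeping only the top-left coefficients therefore preserves most of the information while shrinking the feature vector, before dimensionality reduction such as LSDA.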
    • Open Access Article

      70 - The main components of evaluating the credibility of users according to organizational goals in the life cycle of big data
      Sogand Dehghan shahriyar mohammadi rojiar pirmohamadiani
      Social networks have become one of the most important inputs to decision-making in organizations due to the speed with which they publish events and the large amount of information they carry. For this reason, they play a major role in assessing the validity of information: the accuracy, reliability, and value of information are clarified through these networks. The validity of information can be checked using the features of these networks at three levels: user, content, and event. The user level is the most reliable, because a credible user usually publishes valid content. Despite the importance of this topic and the various studies conducted in this field, important components of the process of evaluating the credibility of social network information have received little attention. Hence, this research identifies, collects, and examines the related components through a narrative review of 30 important and original articles in the field. Articles in this field can usually be compared along three dimensions: credibility analysis approaches, content topic detection, and feature selection methods; these dimensions are therefore investigated and categorized. Finally, an initial framework is presented that focuses on evaluating the credibility of users as information sources. This article is a suitable guide for estimating user credibility in the decision-making process. Manuscript profile
    • Open Access Article

      71 - Predicting the workload of virtual machines in order to reduce energy consumption in cloud data centers using the combination of deep learning models
      Zeinab Khodaverdian Hossein Sadr Mojdeh Nazari Soleimandarabi Seyed Ahmad Edalatpanah
      Cloud computing service models are growing rapidly, and inefficient use of resources in cloud data centers leads to high energy consumption and increased costs. Resource allocation schemes aiming to reduce energy consumption in cloud data centers rely on live migration of Virtual Machines (VMs) and their consolidation onto a small number of Physical Machines (PMs). However, selecting the appropriate VM for migration is an important challenge. To solve this issue, VMs can be classified according to the pattern of user requests into Delay-Sensitive (interactive) or Delay-Insensitive classes, and suitable VMs can then be selected for migration. This is made possible by VM workload prediction; in fact, workload prediction and analysis is a pre-migration step. In this paper, a hybrid model based on a Convolutional Neural Network (CNN) and a Gated Recurrent Unit (GRU) is proposed for classifying VMs in the Microsoft Azure cloud service. The Microsoft Azure dataset is labeled, with VM workloads tagged as either Delay-Sensitive (interactive) or Delay-Insensitive, but the distribution of samples is unbalanced: most samples belong to the Delay-Insensitive class. Therefore, the Random Over-Sampling (ROS) method is used in this paper to overcome this challenge. Based on the empirical results, the proposed model obtained an accuracy of 94.42%, which clearly demonstrates its superiority over other existing models. Manuscript profile
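The Random Over-Sampling step is straightforward to sketch: minority-class samples are duplicated at random until the class counts balance. Toy workload vectors, not the Azure dataset:

```python
import random
from collections import Counter

def random_oversample(X, y, seed=0):
    """Duplicate minority-class samples (with replacement) until all classes
    reach the size of the largest class."""
    rng = random.Random(seed)
    counts = Counter(y)
    target = max(counts.values())
    X_out, y_out = list(X), list(y)
    for label, n in counts.items():
        idx = [i for i, lab in enumerate(y) if lab == label]
        for _ in range(target - n):
            i = rng.choice(idx)
            X_out.append(X[i])
            y_out.append(label)
    return X_out, y_out

# Imbalanced toy workloads: 5 delay-insensitive (0) vs 2 delay-sensitive (1)
X = [[0.2], [0.3], [0.1], [0.4], [0.25], [0.9], [0.8]]
y = [0, 0, 0, 0, 0, 1, 1]
Xb, yb = random_oversample(X, y)
```

Note that oversampling must be applied only to the training split, otherwise duplicated samples leak into the test set and inflate accuracy.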
    • Open Access Article

      72 - The effect of Internet of Things (IoT) implementation on the Rail Freight Industry: A futures study approach
      Noureddin Taraz Monfared Ali Shayan Ali Rajabzadeh Ghotri
      The rail freight industry in Iran has faced several challenges that have affected its performance. Although the use of the Internet of Things is rapidly increasing in railway industries elsewhere, as a proven solution in other countries, Iran's rail freight industry has not yet adopted it, and no related research or experiments have been identified in Iran either. The aim of this survey is to identify the effects of implementing the Internet of Things in Iran's rail freight industry. To gather the data, the Delphi method was selected, and the snowball technique was used to organize a panel of twenty experts. To evaluate the outcomes, the IQR, binomial tests, and means were calculated. Several statements were identified, and there was broad consensus on most of them, confirming that their implementation affects the Iranian rail freight industry, although to different degrees. Finally, the results were organized in the Balanced Scorecard format. The internal business process perspective was affected more than the other perspectives by the approved statements. Eleven recognized elements are affected to different degrees across the Internal Business Process, Financial, Learning and Growth, and Customer perspectives. The Financial perspective showed the least consensus, while the Internal Business Process received the strongest consensus. The research outcomes can be used to improve the strategic planning of the Iranian rail freight industry by applying the achievements of information technology in practice. Manuscript profile
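The Delphi consensus check mentioned above is commonly operationalized with the interquartile range of panel ratings. A small sketch with hypothetical Likert ratings and an assumed IQR threshold of one (the paper's exact cut-off is not stated here):

```python
from statistics import median, quantiles

def delphi_consensus(ratings, iqr_threshold=1.0):
    """Flag consensus on a Delphi statement when the interquartile range of
    the panel's Likert ratings does not exceed the threshold."""
    q1, _, q3 = quantiles(ratings, n=4)   # default 'exclusive' method
    iqr = q3 - q1
    return iqr <= iqr_threshold, median(ratings), iqr

# Hypothetical panel ratings (1-5 Likert) for two statements
agreed, med, iqr = delphi_consensus([4, 4, 5, 4, 4, 5, 4, 4, 5, 4])
split, med2, iqr2 = delphi_consensus([1, 2, 5, 4, 1, 5, 2, 4, 5, 1])
```

The first statement clusters tightly around "agree" and passes; the second splits the panel and would go to another Delphi round.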
    • Open Access Article

      73 - Design of Distributed Consensus Controller for Leader-Follower Singular Multi-Agent Systems in the Presence of Sensor Fault
      Saeid Poormirzaee Hamidreza Ahmadzadeh masoud Shafiee
      In this paper, the problem of sensor fault estimation and the design of a distributed fault-tolerant controller are investigated for the first time to guarantee leader-follower consensus in homogeneous singular multi-agent systems. First, a novel augmented model for the system is proposed; it is shown that, unlike in some similar research works, the proposed model is regular and impulse-free. Based on this model, the state and the sensor fault of the system are estimated simultaneously by designing a distributed singular observer, which can also estimate time-varying sensor faults. Then, a distributed controller is designed to guarantee leader-follower consensus using the estimates of the state and sensor fault. Sufficient conditions ensuring the stability of the observer dynamics and the consensus dynamics are derived in terms of linear matrix inequalities (LMIs), and the observer and controller gains are computed by solving these conditions with MATLAB software. Finally, the validity and efficiency of the proposed control system for leader-follower consensus of singular multi-agent systems exposed to sensor faults are illustrated by computer simulations. The simulation results show that the proposed control strategy effectively handles sensor faults in singular multi-agent systems. Manuscript profile
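Setting controller synthesis and fault estimation aside, the leader-follower consensus objective itself can be illustrated with a simple pinned-Laplacian simulation: followers converge to the leader's state. The graph, pinning gains, and step size here are illustrative, not the paper's LMI-based design:

```python
import numpy as np

def simulate_consensus(L_f, b, x0, leader=1.0, step=0.05, iters=1000):
    """Discrete-time leader-follower consensus:
    x <- x - step * (L_f @ x + b * (x - leader)),
    where L_f is the follower interaction Laplacian and b marks the
    followers pinned directly to the leader."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x = x - step * (L_f @ x + b * (x - leader))
    return x

# Path graph of three followers; only follower 0 observes the leader
L_f = np.array([[1.0, -1.0, 0.0],
                [-1.0, 2.0, -1.0],
                [0.0, -1.0, 1.0]])
b = np.array([1.0, 0.0, 0.0])
x_final = simulate_consensus(L_f, b, x0=[0.0, 0.5, -0.5])
```

Information from the leader propagates through the pinned follower to the rest of the connected graph, so all follower states approach the leader value.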
    • Open Access Article

      74 - Indigenous model of commercialization of complex technologies based on partnership in the ICT sector
      Mahdi Fardinia Fatemeh saghafi Jalal Haghighat Monfared
      The ICT industry is one of the most complex industries, with advanced technologies. The sustainable growth of companies in this industry depends on successful commercialization, and because of the field's complexity, knowledge sharing between companies is essential. A careful review of the literature showed that there is no model of how to succeed in commercialization and of its relationship with inter-organizational partnership in the ICT sector; this was therefore set as the goal of the research. From the research background, the factors affecting the success of partnership-based commercialization, including internal and external drivers, partnership, resources, dynamic capabilities, executive mechanisms, and performance, were drawn together into a conceptual model. Then, through a multiple-case study of technological projects of the ICT Research Institute (the Padvish antivirus, the native search engine project, the native SOC operations center, and POTN communication equipment) and content analysis, the main and secondary themes of the model were extracted. Using a focus group of experts, the results were validated and the themes (propositions) were confirmed; the relationships between the components were also confirmed by the expert panel. The final model combines these factors in a way that, according to Iran's indigenous experience, has led to commercialization success and can serve as a basis for policy-making for successful knowledge-based products. Manuscript profile
    • Open Access Article

      75 - Improving energy consumption in the Internet of Things using the Krill Herd optimization algorithm and mobile sink
      Shayesteh Tabatabaei
      Internet of Things (IoT) technology involves a large number of sensor nodes that generate large amounts of data. Optimizing the energy consumption of the sensor nodes is a major challenge in this type of network. Clustering sensor nodes into separate groups and exchanging information through cluster heads is one way to reduce energy consumption. This paper introduces a new clustering-based routing protocol called KHCMSBA. The proposed protocol clusters the sensor nodes using the fast and efficient search features of the Krill Herd optimization algorithm, which is biologically inspired by krill feeding behavior. The protocol also uses a mobile sink to prevent the hot spot problem. The clustering process at the base station is performed by a centralized control algorithm that is aware of the energy levels and positions of the sensor nodes. Unlike the protocols in other research, KHCMSBA assumes a realistic energy model in the network; it is tested in the Opnet simulator and the results are compared with AFSRP (Artificial Fish Swarm Routing Protocol). The simulation results show better performance of the proposed method in terms of energy consumption by 12.71%, throughput by 14.22%, end-to-end delay by 76.07%, and signal-to-noise ratio by 82.82% compared to the AFSRP protocol. Manuscript profile
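Energy accounting in such clustering protocols typically uses the first-order radio model, where transmission cost grows with d² below a crossover distance and with d⁴ above it. A sketch using the model's commonly cited constants; the packet size and distances are illustrative, and this is not the paper's exact energy model:

```python
E_ELEC = 50e-9       # J/bit, electronics energy (typical first-order radio model)
EPS_FS = 10e-12      # J/bit/m^2, free-space amplifier coefficient
EPS_MP = 0.0013e-12  # J/bit/m^4, multipath amplifier coefficient
D0 = (EPS_FS / EPS_MP) ** 0.5   # crossover distance (~87.7 m)

def tx_energy(bits, d):
    """Energy to transmit `bits` over distance d (free-space vs multipath)."""
    if d < D0:
        return bits * (E_ELEC + EPS_FS * d ** 2)
    return bits * (E_ELEC + EPS_MP * d ** 4)

def rx_energy(bits):
    """Energy to receive `bits`."""
    return bits * E_ELEC

# A member node sends a 4000-bit packet 40 m to its cluster head,
# which relays it 30 m to the (mobile) sink
hop1 = tx_energy(4000, 40) + rx_energy(4000)
hop2 = tx_energy(4000, 30)
total = hop1 + hop2
```

Because cost rises steeply beyond the crossover distance, a mobile sink that shortens the final hop can substantially cut cluster-head energy drain, mitigating the hot spot problem.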
    • Open Access Article

      76 - Liquidity Risk Prediction Using News Sentiment Analysis
      Hamed Mirashk Albadvi Mehrdad Kargari Mohammad Ali Rastegar Mohammad Talebi
      One of the main problems of Iranian banks is the lack of a forward-looking risk management process, and one of the most important risks in banking is liquidity risk; predicting liquidity risk has therefore become an important issue for banks. Conventional methods of measuring liquidity risk are complex, time-consuming, and expensive, which puts timely prediction far out of reach, yet predicting liquidity risk at the right time can prevent serious problems or crises in a bank. This study provides an innovative solution for predicting bank liquidity risk and its leading scenarios using news sentiment analysis. Sentiment analysis of news about one of the Iranian banks is used to identify dynamic and effective qualitative factors in liquidity risk, providing a simpler and more efficient method for predicting the liquidity risk trend. The proposed method yields practical scenarios for real-world banking risk decision-makers. The obtained liquidity risk scenarios are evaluated against the scenarios that occurred in the bank, according to the guidelines of the Basel Committee and the opinions of banking experts, to verify the correctness and alignment of the predictions. Periodic evaluation of the studied scenarios indicates relatively high accuracy: 95.5% for possible scenarios derived from the Basel Committee and 75% for scenarios derived from experts' opinions. Manuscript profile
    • Open Access Article

      77 - Identifying and analyzing decision points and key players in the procurement process in EPC companies
      Seyedeh Motahareh Hosseini, Mohammad Aghdasi
      Correct and timely decisions have a significant impact on a company's performance and the achievement of its goals; in other words, business process management depends on making and implementing rational decisions. With the increasing integration of information systems in organizations and the use of tools such as process mining, a platform is provided for applying data analysis approaches to better analyze decisions, enabling managers to make agile decisions. Supplier selection within the purchasing process of complex projects is one of the basic and key decisions affecting the quality, cost, and performance of a project. In this article, taking a process perspective, the decision points in the purchasing process of a complex construction project in an EPC company are discovered, and the key players in process execution are identified and analyzed through social network analysis. The results cover the decision points in the process, their performance, and the identification of the key people in decision making, and can be used to improve the company's future performance.
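The combination of process mining and social network analysis described above can be sketched with a toy "handover of work" network: consecutive events of the same case define actor-to-actor handovers, and degree centrality surfaces key players. The case IDs, activities, and actor names below are made up for illustration; a real study would use a process mining toolkit on the company's event log.

```python
# Illustrative sketch: derive a handover-of-work social network from a
# purchasing-process event log and rank actors by degree centrality.
from collections import defaultdict

event_log = [  # (case_id, activity, actor), ordered by time within each case
    ("PO-1", "create requisition", "ali"),
    ("PO-1", "approve budget", "sara"),
    ("PO-1", "select supplier", "reza"),
    ("PO-2", "create requisition", "ali"),
    ("PO-2", "approve budget", "sara"),
]

def handover_network(log):
    """Count actor-to-actor handovers between consecutive events of each case."""
    by_case = defaultdict(list)
    for case, _activity, actor in log:
        by_case[case].append(actor)
    edges = defaultdict(int)
    for actors in by_case.values():
        for a, b in zip(actors, actors[1:]):
            if a != b:
                edges[(a, b)] += 1
    return dict(edges)

def key_players(edges):
    """Rank actors by total handovers they take part in (degree centrality)."""
    degree = defaultdict(int)
    for (a, b), w in edges.items():
        degree[a] += w
        degree[b] += w
    return sorted(degree, key=degree.get, reverse=True)

edges = handover_network(event_log)
print(key_players(edges)[0])  # → sara (sits between requisition and supplier selection)
```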
    • Open Access Article

      78 - Improving Fake Website Detection Using a Multi-Layer Artificial Neural Network Classifier with the Ant Lion Optimizer Algorithm
      Farhang Padidaran Moghaddam, Mahshid Sadeghi B.
      In phishing attacks, a fake site that looks very similar to the original is forged from the main site. To direct users to these sites, phishers usually place fake links in emails sent to their victims, using social engineering methods to deceive users and persuade them to click. Phishing attacks cause significant financial losses, and most attacks target banks and financial gateways. Machine learning methods are an effective way to detect phishing attacks, provided that optimal features are selected. Feature selection allows only the important features to be used as learning input, reducing the detection error of phishing attacks. In the proposed method, a multilayer artificial neural network classifier is used to reduce the detection error, and the feature selection phase is performed by the ant lion optimizer (ALO) algorithm. Evaluations and experiments on the Rami phishing dataset show that the proposed method has an accuracy of about 98.53% and a lower error than a multilayer artificial neural network alone. The proposed method also detects phishing attacks more accurately than BPNN, SVM, NB, C4.5, RF, and kNN learners with a PSO-based feature selection mechanism.
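The wrapper-style feature selection described above can be sketched with a tiny synthetic example. Here an exhaustive search over feature masks stands in for the ant lion optimizer (which would replace the loop when the feature space is too large to enumerate), and a toy fitness function stands in for the classifier's cross-validated accuracy; the informative-feature set is invented for illustration.

```python
# Illustrative sketch of wrapper feature selection for the phishing classifier.
# Exhaustive search stands in for ALO; fitness is a synthetic proxy, not a real
# neural network's accuracy.
import itertools

# Synthetic setup: 8 candidate features; only features 1, 3, 4 are informative.
INFORMATIVE = {1, 3, 4}

def fitness(mask):
    """Proxy for classifier accuracy: reward informative features, penalize size."""
    chosen = {i for i, bit in enumerate(mask) if bit}
    return len(chosen & INFORMATIVE) - 0.1 * len(chosen)

def select_features(n_features=8):
    """Score all 2^n masks (feasible for 8 features); a metaheuristic such as
    ALO replaces this loop when n is large."""
    best = max(itertools.product([0, 1], repeat=n_features), key=fitness)
    return [i for i, bit in enumerate(best) if bit]

print(select_features())  # → [1, 3, 4]
```

The design point is that only the selected columns are fed to the neural network, so the search directly optimizes the downstream detection error.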
    • Open Access Article

      79 - Extracting Innovation Strategies and Requirements for Telecommunication Companies: A Case Study of the Telecommunications Infrastructure Company
      Alireza Esmaeeli, Alireza Asgharian, Takavash Bahreini, Nasrin Dastranj, Mahshid Ghaffarzadegan, Kolsoum Abbasi-Shahkooh, Mandana Farzaneh, Homeyra Moghadami
      The purpose of this study is to identify effective innovation strategies for a governmental, mission-oriented organization in the field of information and communication technology. The company under study is a governmental organization with a monopoly market, scattered innovation initiatives, managers interested in organizational innovation, and a clear, up-to-date strategy and structure. In this paper, innovation strategies were collected through comparative studies of similar international companies. Thematic analysis of data obtained from semi-structured interviews then identified the strengths and weaknesses related to innovation. By matching these two categories of information, a number of appropriate strategies are proposed and their implementation considerations stated, based on the specific characteristics of this company. Accordingly, suggestions for the company's future research are presented to identify appropriate methods for implementing organizational innovation in similar circumstances.
    • Open Access Article

      80 - Using limited memory to store the most recent action in XCS learning classifier systems in maze problems
      Ali Yousefi, Kambiz Badie, Mohamad Mehdi Ebadzade, Arash Sharifi
      Nowadays, learning classifier systems receive attention in various robotics applications, such as sensor-equipped robots, humanoid robots, intelligent search-and-rescue systems, and the control of physical robots in discrete and continuous environments. Usually, an evolutionary algorithm or heuristic method is combined with a learning process to search the space of existing rules for the appropriate action of a classifier. An important challenge in increasing the speed and accuracy of reaching the goal in maze problems is choosing actions that keep the agent on the right path instead of repeatedly hitting the surrounding obstacles. For this purpose, this article extends the accuracy-based learning classifier system (XCS) with a limited memory: based on the inputs, the actions applied to the environment, and the agent's reaction, the optimal rules are identified and added as new classifiers to the XCS algorithm in subsequent steps. The achievements of this method include reducing the number of steps required and increasing the speed with which the agent reaches the target, compared to the standard XCS algorithm.
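The core idea of remembering the most recent action can be shown with a toy maze walker rather than a full XCS implementation: the agent keeps a one-slot memory of the last action that hit a wall and skips it on the next attempt, so it stops bumping into the same obstacle. The maze, action order, and memory policy below are all simplifications invented for illustration.

```python
# Illustrative sketch of limited memory for the most recent failed action in a
# maze. This is a toy walker, not an XCS classifier population.

MAZE = [  # 1 = wall, 0 = free; start (1,1), goal (1,3)
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
]
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def walk(start, goal, max_steps=20):
    pos, last_failed = start, None  # limited memory: the last blocked action
    path = [pos]
    for _ in range(max_steps):
        if pos == goal:
            break
        for action in ("up", "right", "down", "left"):
            if action == last_failed:
                continue  # skip the action we just saw fail
            dr, dc = MOVES[action]
            nxt = (pos[0] + dr, pos[1] + dc)
            if MAZE[nxt[0]][nxt[1]] == 0:
                pos, last_failed = nxt, None
                path.append(pos)
                break
            last_failed = action  # remember the blocked action
    return path

print(walk((1, 1), (1, 3)))  # → [(1, 1), (1, 2), (1, 3)]
```

In the paper's setting, the analogous memory feeds back into which classifier rules are generated and reinforced, rather than a hard-coded skip.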
    • Open Access Article

      81 - Design and implementation of a survival model for patients with melanoma based on data mining algorithms
      Farinaz Sanaei, Seyed Abdollah Amin Mousavi, Abbas Toloie Eshlaghy, Ali Rajabzadeh Ghotri
      Background/Purpose: Melanoma is among the most commonly diagnosed cancers and the second leading cause of cancer-related death, and a growing number of people are becoming its victims. It is also the most malignant and rare form of skin cancer; advanced cases may cause death as the disease spreads to internal organs. The National Cancer Institute reported that approximately 99,780 people were diagnosed with melanoma in 2022 and approximately 7,650 died. This study therefore aims to develop an optimized algorithm for predicting melanoma patients' survival. Methodology: This applied research was a descriptive-analytical, retrospective study. The study population included patients with melanoma identified from the National Cancer Research Center at Shahid Beheshti University between 2008 and 2013, with a five-year follow-up period. An optimal model for melanoma survival prognosis was selected based on the evaluation metrics of the data mining algorithms. Findings: A neural network, a Naïve Bayes classifier, a Bayesian network, a combination of decision tree and Naïve Bayes, logistic regression, J48, and ID3 were the models applied to the national database. Statistically, the neural network outperformed the other selected algorithms on all evaluation metrics. Conclusion: The neural network, with a value of 0.97, showed optimal reliability, and the resulting melanoma survival prediction model performed better in terms of both discrimination power and reliability. This algorithm is therefore proposed as the melanoma survival prediction model.
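Comparing classifiers on evaluation metrics, as done above, reduces to confusion-matrix arithmetic. The sketch below shows the standard metrics for one hypothetical model; the counts are made-up numbers, not the paper's results.

```python
# Illustrative sketch of the evaluation metrics used to compare survival
# classifiers. The confusion-matrix counts are invented for the example.

def metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    sensitivity = tp / (tp + fn)   # recall for the "survived" class
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Hypothetical 5-year survival predictions: 90 true positives, 5 false
# positives, 10 false negatives, 95 true negatives.
acc, sens, spec = metrics(tp=90, fp=5, fn=10, tn=95)
print(round(acc, 3), round(sens, 3), round(spec, 3))
```

Each candidate algorithm (neural network, Naïve Bayes, J48, ...) would get one such row, and the model with the best balance across metrics is selected.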
    • Open Access Article

      82 - An Intrusion Detection System based on Deep Learning for CAN Bus
      Fatemeh Asghariyan, Mohsen Raji
      In recent years, with the advancement of automotive electronics and the development of modern vehicles built on embedded systems and portable equipment, in-vehicle networks such as the Controller Area Network (CAN) have faced new security risks. Since the CAN bus lacks security mechanisms such as authentication and encryption to deal with cyber-attacks, an intrusion detection system is needed to detect attacks on it. In this paper, a deep adversarial convolutional neural network (DACNN) is proposed to detect various types of security intrusions on CAN buses. The DACNN method, an extension of CNN using adversarial learning, detects intrusion in three stages: first, the CNN acts as a feature descriptor and extracts the main features; second, a discriminating classifier classifies these features; and finally, the intrusion is detected using adversarial learning. To show the efficiency of the proposed method, a real open-source dataset was used in which CAN network traffic was recorded on a real vehicle during message injection attacks. The results show that the proposed method outperforms other machine learning methods in terms of false negative rate and error rate, which are below 0.1% for DoS, drive gear forgery, and RPM forgery attacks, and below 0.5% for the fuzzy attack.
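For intuition about what such an IDS looks for, a far simpler baseline than the paper's DACNN already flags DoS-style message injection: a flood of one arbitration ID dominating a sliding window of recent CAN frames. The window size, threshold, and frame IDs below are invented for the example.

```python
# Illustrative sketch: a sliding-window frequency check on CAN frame IDs as a
# toy DoS-injection detector. Not the paper's deep adversarial model.
from collections import Counter, deque

def flag_dos(frames, window=10, threshold=0.6):
    """Yield True per frame once a single CAN ID dominates the recent window."""
    recent = deque(maxlen=window)
    for can_id in frames:
        recent.append(can_id)
        top = Counter(recent).most_common(1)[0][1]
        yield len(recent) == window and top / window >= threshold

# Normal mixed traffic, then an injected flood of ID 0x000 (typical DoS).
frames = [0x100, 0x200, 0x300] * 4 + [0x000] * 10
flags = list(flag_dos(frames))
print(flags[-1])  # → True (the flood dominates the final window)
```

A learned model generalizes this idea to subtler attacks (fuzzy, gear and RPM forgery) where simple frequency thresholds fail.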
    • Open Access Article

      83 - The framework of the national macro plan for transparency and information release based on the grounded theory method
      Mahdi Azizi MehmanDoost, Mohammad Reza Hosseini, Reza Taghipour, Mojtaba Mazoochi
      The purpose of this research is to present the framework of the national macro plan for transparency and information release. The research employs a mixed (qualitative and quantitative) approach with grounded theory as its methodology. In the qualitative part, through an in-depth, exploratory review of upstream laws and documents, models, theories, plans, and white papers of different countries related to transparency and information release, data analysis was carried out to theoretical saturation through the three stages of open, axial, and selective coding. To derive the dimensions, components, and subcomponents of the framework, 129 concepts were extracted from 620 primary codes, which were reduced to 593 secondary codes by removing duplicates; finally, 24 subcategories were placed under the five main components of the paradigm model. In the quantitative part, analysis of the questionnaire showed validity values between 0.87 and 0.92 across dimensions and reliability coefficients between 0.73 and 0.78. Based on the data analysis, the establishment of a supranational management institution for transparency and information release, the precise determination of exceptions, network governance, demanding transparency, adherence to frameworks, maximum disclosure with support for legitimate disclosure, and the establishment of a data governance center are among the subcategories emphasized in this framework.
    • Open Access Article

      84 - Drivers, Obstacles and consequences of digital entrepreneurship in Iran's road freight transportation industry
      Azam Sadtat Mortazavi Kahangi, Parviz Saketi, Javad Mehrabi
      The purpose of this research is to identify the drivers, obstacles, and consequences of digital entrepreneurship in Iran's road freight transportation industry. The statistical population in the qualitative part consisted of 20 experts in this field, selected until theoretical saturation was reached. In the quantitative part, using Cochran's formula and cluster sampling, 170 employees of the industry were selected as the sample. Data were collected through semi-structured interviews in the qualitative part and through a researcher-made questionnaire, whose validity and reliability were checked and confirmed, in the quantitative part. For data analysis, systematic literature review and coding with MAXQDA software were used in the qualitative part, and inferential statistics with SPSS and LISREL software in the quantitative part. Finally, 9 indicators in 4 driver factors, 11 indicators in 3 obstacle factors, and 55 indicators in 8 consequence categories were extracted and prioritized using factor analysis. The results show that the political component has the highest priority both as a driver and as an obstacle; the role of the government in this field is therefore very important.
    • Open Access Article

      85 - WSTMOS: A Method for Optimizing Throughput, Energy, and Latency in Cloud Workflow Scheduling
      Arash Ghorbannia Delavar, Reza Akraminejad, Sahar Mozafari
      The use of cloud computing in datacenters around the world has led to the generation of more CO2, and energy and throughput are two of the most important issues in this field. This paper presents an energy- and throughput-aware algorithm for scheduling compressed-instance workflows in the Internet of Things using cluster processing in the cloud. A method is presented for scheduling cloud workflows that optimizes energy, throughput, and latency. The proposed method improves time and energy consumption over previous methods by introducing distance parameters, clustering the inputs, and considering real execution times. By considering these parameters and real execution times, WSTMOS reaches an optimized objective function; it also uses a task-to-virtual-machine time distance parameter to reduce the number of migrations between virtual machines. By organizing workflow inputs into low, medium, and heavy groups and distributing the load onto servers according to processor thresholds, WSTMOS optimizes energy and cost: energy consumption was reduced by 4.8 percent and cost by 4.4 percent compared to the studied baseline. Finally, average delay time, power, and workload are also improved compared to previous methods.
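One ingredient of the method, grouping workflow tasks into low/medium/heavy classes and routing heavier classes to faster servers, can be sketched directly. Task lengths, server speeds, and the class thresholds below are invented for the example, not WSTMOS's actual parameters.

```python
# Illustrative sketch of workload grouping and threshold-based placement.
# Lengths (e.g. MI) and speeds (e.g. MIPS) are hypothetical.

def classify(length):
    if length < 100:
        return "low"
    if length < 1000:
        return "medium"
    return "heavy"

def assign(tasks, servers):
    """tasks: name -> length; servers: name -> speed. Heavy tasks go to the
    fastest server, medium to the next, low to the slowest available."""
    ranked = sorted(servers, key=servers.get, reverse=True)
    tier = {"heavy": 0, "medium": 1, "low": 2}
    return {t: ranked[min(tier[classify(n)], len(ranked) - 1)]
            for t, n in tasks.items()}

tasks = {"t1": 50, "t2": 500, "t3": 5000}
servers = {"s_small": 100, "s_mid": 500, "s_big": 2000}
print(assign(tasks, servers))  # → {'t1': 's_small', 't2': 's_mid', 't3': 's_big'}
```

The full algorithm additionally weighs energy, migration distance, and real execution time in its objective function.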
    • Open Access Article

      86 - Presenting a Web Recommender System for Users' Next Pages Using the DBSCAN Clustering Algorithm and the SVM Machine Learning Method
      Reza Molaee Fard, Mohammad Mosleh
      Recommender systems can predict future user requests and generate a list of a user's favorite pages. In other words, they can build an accurate profile of users' behavior and predict the page a user will choose next, which can solve the system cold-start problem and improve search quality. This research presents a new method for improving web recommender systems. It uses the DBSCAN clustering algorithm to cluster the data, achieving an efficiency score of 99%, then weights the user's favorite pages using the PageRank algorithm, and finally categorizes the data with an SVM. The resulting hybrid recommender system generates predictions and presents the user with a list of pages likely to be of interest. Evaluation shows that the proposed method achieves 95% recall and 99% accuracy, correctly identifying more than 90% of the user's intended pages and largely overcoming the weaknesses of previous systems.
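The PageRank weighting step can be shown on a toy link graph with plain power iteration; the DBSCAN clustering and SVM classification stages of the full pipeline are not reproduced here, and the graph is invented for illustration.

```python
# Illustrative sketch of PageRank (power iteration) for weighting pages.

def pagerank(links, damping=0.85, iters=50):
    """links: page -> list of outgoing links. Returns page -> rank."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            share = rank[p] / len(outs) if outs else 0
            for q in outs:
                new[q] += damping * share
        rank = new
    return rank

links = {"home": ["a", "b"], "a": ["b"], "b": ["home"]}
rank = pagerank(links)
print(max(rank, key=rank.get))  # → b (it receives links from both 'home' and 'a')
```

In the recommender, these rank values weight each candidate page before the SVM produces the final prediction list.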
    • Open Access Article

      87 - Face Recognition and Liveness Detection Based on Speech Recognition for Electronic Authentication
      Ahmad Dolatkhah, Behnam Dorostkar Yaghouti, Raheb Hashempour
      As technology develops, institutions and organizations provide many services electronically and intelligently over the Internet. The police, as an institution serving people and other institutions, aims to make its services smarter, and various electronic and intelligent systems have been offered to this end. Because these systems lack authentication, many services that could be provided online still require a visit to Police+10 service offices. Given budget and equipment limitations for face-to-face response, the limits of the police force and its focus on essential issues, the lack of service offices in villages and their limited number in cities, and the growing demand for online services, especially in crises such as the COVID-19 pandemic, electronic authentication is becoming increasingly important. This article reviews electronic authentication and its necessity, along with liveness detection and face recognition, two of the most important technologies in this area. We then present an efficient face recognition method using deep learning models for face matching, as well as an interactive liveness detection method based on Persian speech recognition. The final section presents the results of testing these models on relevant data from this field.
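The face-matching step typically compares deep-model embeddings by cosine similarity against a threshold. The sketch below uses tiny 4-dimensional vectors and a 0.8 threshold as toy stand-ins; real embeddings are hundreds of dimensions and the threshold is tuned on validation data.

```python
# Illustrative sketch of embedding-based face matching via cosine similarity.
# Vectors and threshold are hypothetical, not the paper's model outputs.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def same_person(emb1, emb2, threshold=0.8):
    return cosine(emb1, emb2) >= threshold

enrolled = [0.9, 0.1, 0.3, 0.2]       # embedding stored at enrollment
probe_ok = [0.85, 0.15, 0.28, 0.22]   # same face, slightly different capture
probe_other = [0.1, 0.9, 0.2, 0.4]    # a different face
print(same_person(enrolled, probe_ok), same_person(enrolled, probe_other))  # → True False
```

Liveness detection runs before this match: the interactive Persian-speech challenge ensures the probe image comes from a live person rather than a photo or replay.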
    • Open Access Article

      88 - Estimating the Value of Digital Economy Core Spillover in Iran
      Niloufar Moradhassel, Bita Mohebikhah
      Background and Purpose: Most studies discuss the direct effects of the ICT sector, but the indirect (spillover) effects and how to measure them have received little attention; that is the focus of this article. To this end, after delineating the territory of the digital economy, the gross value of the core of the country's digital economy is estimated. Methodology: Using the Solow growth model, the spillover effects of the core of the digital economy (ICT) are estimated for the period 2002-2019. Findings: The results show that over the period under review, given the elasticity of labor productivity with respect to the ICT sector's share of net capital formation in the national economy (about 0.3), the spillover effects of the digital economy core increased from 210 thousand billion Rials in 2015 to 279 thousand billion Rials in 2019.
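The kind of specification implied by the abstract can be written as a Solow-style labor-productivity regression with ICT capital as a separate input. The exact functional form and symbols below are assumptions for illustration, not the paper's estimated equation; only the elasticity value of roughly 0.3 comes from the abstract.

```latex
% Sketch of a Solow-type specification with an ICT capital term:
\ln\left(\frac{Y_t}{L_t}\right) = c + \alpha \ln k_t + \beta \ln k^{ICT}_t + \varepsilon_t,
\qquad \hat{\beta} \approx 0.3
```

Here $Y_t/L_t$ is labor productivity, $k_t$ is non-ICT capital per worker, $k^{ICT}_t$ proxies the ICT sector's share of net capital formation, and $\hat{\beta}$ is the elasticity from which the spillover value is derived.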
    • Open Access Article

      89 - Applying Deep Learning to Improve Sentiment Analysis of Persian Comments on Online Retail Stores
      Faezeh Forootan, Mohammad Rabiei
      The retail market industry is one of the industries that affects national economies, and its life depends on the level of customer satisfaction and trust in buying from these markets. In such a situation, the industry is trying to provide conditions for customer feedback and interaction with retailers through web pages and online platforms, because the analysis of published opinions plays a role not only in determining customer satisfaction but also in improving products. In recent years, therefore, sentiment analysis techniques for analyzing and summarizing opinions have attracted researchers in various fields, especially the retail market industry.
    • Open Access Article

      90 - Mapping the Artefacts and Producers of Iran's Artificial Intelligence Ecosystem Based on Transformational Levels
      Hamed Ojaghi, Iman Zohoorian Nadali, Fatemeh Soleymani Roozbahani
      As an emerging technological field, artificial intelligence has received increasing attention from companies and governments, and its development at both the business and country level depends on knowing the current situation. This paper identifies the artifacts and producers present in this field in Iran and maps them to transformational levels. Products/services and producers were identified through the capabilities provided by artificial intelligence; then, based on the classification methodology and meta-characteristics, the transformational levels of the artifacts of Iran's artificial intelligence ecosystem were extracted. In total, 562 products/services offered by 112 companies were identified. Machine vision and natural language processing top the technologies used, accounting for 44 and 27 percent of the products, respectively. Artifacts and producers were classified into eight transformational levels: individual, organization, industry, electronic chip/hardware, society, platform, code/algorithm/library, and infrastructure. Iran's artificial intelligence production has not grown in a balanced way: the three levels of platform, code/algorithm/library, and infrastructure, the main generators of other artificial intelligence products/services, show the lowest amount of production. It is suggested that a specialized marketplace for supplying artificial intelligence application programming interfaces be put on the agenda to stimulate the formation of the ecosystem.
    • Open Access Article

      91 - Noor Analysis: A Benchmark Dataset for Evaluating Morphological Analysis Engines
      Huda Al-Shohayyeb, Behrooz Minaei, Mohammad Ebrahim Shenassa, Sayyed Ali Hossayni
      Arabic has a very rich and complex morphology, and morphological analysis is very useful for processing Arabic, especially traditional texts such as historical and religious works, where it helps in understanding their meaning. In a morphological dataset, the variety of labels and the number of data samples help in evaluating morphological methods. The morphological dataset we present includes about 223,690 words from the book Sharia al-Islam, labeled by experts; in terms of volume and variety of labels it surpasses the other datasets provided for Arabic morphological analysis. To assess the data, we applied the Farasa system to the texts, and we report the annotation quality through four evaluations of the Farasa system.
    • Open Access Article

      92 - Improving intrusion detection systems for the Industrial Internet of Things based on deep learning using metaheuristic algorithms
      Mohammadreza Zeraatkarmoghaddam, Majid Ghayori
      Due to the increasing use of industrial Internet of Things (IIoT) systems, intrusion detection systems (IDS) are among the most widely used security mechanisms in the IIoT, and deep learning techniques are increasingly used in them to detect attacks, anomalies, and intrusions. In deep learning, the most important challenge in training neural networks is determining their hyperparameters. To overcome this challenge, we present a hybrid approach that automates hyperparameter tuning in a deep learning architecture, eliminating the human factor. This article uses an IIoT IDS based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks, tuned by the particle swarm optimization (PSO) and whale optimization (WOA) metaheuristic algorithms. This hybrid of neural networks and metaheuristics improves neural network performance, increases the detection rate, and reduces training time: with the PSO-WOA algorithm, the hyperparameters of the neural network are determined automatically, without human intervention. The UNSW-NB15 dataset is used for training and testing. The PSO-WOA algorithm optimizes the hyperparameters by limiting the search space, and the CNN-LSTM network is trained with the resulting hyperparameters. The implementation results show that, in addition to automating hyperparameter determination, the detection rate of our method improves to 98.5%, a good improvement over other methods.
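Metaheuristic hyperparameter tuning can be sketched with a tiny particle swarm searching a two-dimensional space (learning-rate exponent, hidden units) against a synthetic "validation loss" surface. The real method trains a CNN-LSTM per candidate and hybridizes PSO with WOA; both are simplified away here, and the loss surface, bounds, and PSO coefficients are invented.

```python
# Illustrative sketch of PSO-style hyperparameter search. The objective is a
# synthetic stand-in for validation loss, with a minimum near lr=1e-3, units=128.
import random

random.seed(1)

def val_loss(lr_exp, units):
    """Hypothetical validation-loss surface (smooth, convex)."""
    return (lr_exp + 3) ** 2 + ((units - 128) / 64) ** 2

def pso(n_particles=15, iters=60):
    pos = [[random.uniform(-6, -1), random.uniform(16, 512)]
           for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: val_loss(*p))[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.4 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.4 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if val_loss(*pos[i]) < val_loss(*pbest[i]):
                pbest[i] = pos[i][:]
            if val_loss(*pos[i]) < val_loss(*gbest):
                gbest = pos[i][:]
    return gbest

lr_exp, units = pso()
print(round(lr_exp, 2), round(units, 1))  # converges near -3 (lr = 1e-3) and 128
```

In the paper's setting each fitness evaluation is a full (or truncated) CNN-LSTM training run, which is why narrowing the search space matters so much for cost.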
    • Open Access Article

      93 - Improving load balancing in cloud computing using a rapid SFL algorithm (R-SFLA)
      Kiomars Salimi, Mahdi Mollamotalebi
      Nowadays, cloud computing has many applications thanks to its various services. On the other hand, due to rapid growth, resource constraints, and final costs, cloud computing faces several challenges, one of which is load balancing. The purpose of load balancing is to manage the distribution of load among processing nodes so as to make the best use of resources while minimizing the response time for users' requests. Several load balancing methods for cloud computing have been proposed in the literature; the shuffled frog leaping algorithm is a dynamic, evolutionary, nature-inspired approach. This paper proposes a modified rapid shuffled frog leaping algorithm (R-SFLA) that rapidly corrects the defective evolution of frogs. To evaluate the performance of R-SFLA, it is compared to the shuffled frog leaping algorithm (SFLA) and the augmented shuffled frog leaping algorithm (ASFLA) in terms of overall execution cost, makespan, response time, and degree of imbalance. The simulation is performed in CloudSim, and the experimental results indicate that the proposed algorithm is more efficient than the other methods with respect to the above factors.
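The shuffled-frog-leaping idea applied to load balancing can be sketched in miniature: each "frog" is an assignment of tasks to VMs, fitness is the makespan (maximum VM load), frogs are dealt into memeplexes, and each memeplex's worst frog leaps toward its best. Task sizes, population sizes, and the leap rule below are invented simplifications, not R-SFLA itself.

```python
# Illustrative sketch of SFLA-style load balancing. Fitness = makespan.
import random

random.seed(2)
TASKS = [4, 7, 2, 9, 5, 3]                 # hypothetical task lengths
N_VMS, N_FROGS, MEMEPLEXES, ITERS = 3, 12, 3, 40

def makespan(frog):
    loads = [0] * N_VMS
    for task, vm in zip(TASKS, frog):
        loads[vm] += task
    return max(loads)

def leap(worst, best):
    """Move the worst frog toward the best: copy a random subset of genes."""
    return [b if random.random() < 0.5 else w for w, b in zip(worst, best)]

frogs = [[random.randrange(N_VMS) for _ in TASKS] for _ in range(N_FROGS)]
for _ in range(ITERS):
    frogs.sort(key=makespan)               # shuffle step: re-rank all frogs
    for m in range(MEMEPLEXES):            # deal sorted frogs into memeplexes
        plex = frogs[m::MEMEPLEXES]
        best, worst = plex[0], plex[-1]
        candidate = leap(worst, best)
        if makespan(candidate) < makespan(worst):
            frogs[frogs.index(worst)] = candidate

best = min(frogs, key=makespan)
print(makespan(best))  # optimum here is 11, e.g. {9,2}, {7,3}, {4,5}
```

R-SFLA's contribution is accelerating exactly this worst-frog improvement loop when the leap fails to produce progress.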
    • Open Access Article

      94 - Evaluation of Interpolation Methods for Estimating the Fading Channels in Digital TV Broadcasting
      Ali Pouladsadeh, Mohammadali Sebghati
      Variation in telecommunication channels is a challenge of wireless communication that makes channel estimation and equalization a noteworthy issue. In OFDM systems, some subcarriers can be assigned as pilots for channel estimation. In pilot-aided channel estimation, interpolation is an essential step for obtaining the channel response at the data subcarriers. Choosing the best interpolation method has been the subject of various studies, because no single interpolator is best in all conditions; performance depends on the fading model, the signal-to-noise ratio, and the pilot overhead ratio. In this paper, the effect of different interpolation methods on the quality of DVB-T2 broadcast links is evaluated. A simulation platform is prepared in which different channel models are defined according to real-world measurements. Interpolation is performed by five widely used methods (nearest neighbor, linear, cubic, spline, and Makima) for different pilot ratios. After channel equalization with the interpolator outputs, the bit error rate is calculated as the main criterion for evaluation and comparison, and rules for selecting the appropriate interpolator under different conditions are presented. In general, for fading scenarios close to flat fading or with a high pilot overhead ratio, simple interpolators such as the linear interpolator are proper choices; in harsh conditions, i.e. severely frequency-selective fading channels or a low pilot overhead ratio, more complicated interpolators such as the cubic and spline methods yield better results. The amounts of improvement and the differences are quantified in this study.
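The pilot-aided estimation step can be sketched with the simplest of the five interpolators, linear interpolation across subcarriers. The "true" channel here is a synthetic real-valued ramp so the reconstruction error is easy to inspect; real DVB-T2 channels are complex-valued and the pilot patterns are more involved.

```python
# Illustrative sketch of linear interpolation of channel estimates from
# pilot subcarriers to all subcarriers. Channel values are synthetic.

def linear_interpolate(pilot_idx, pilot_vals, n_subcarriers):
    """Fill every subcarrier's channel estimate from sparse pilot estimates."""
    est = [0.0] * n_subcarriers
    pairs = list(zip(pilot_idx, pilot_vals))
    for (i0, v0), (i1, v1) in zip(pairs, pairs[1:]):
        for k in range(i0, i1 + 1):
            t = (k - i0) / (i1 - i0)
            est[k] = v0 + t * (v1 - v0)
    return est

true_channel = [0.1 * k for k in range(13)]   # slowly varying (near-flat) channel
pilots = [0, 4, 8, 12]                        # every 4th subcarrier carries a pilot
est = linear_interpolate(pilots, [true_channel[k] for k in pilots], 13)
max_err = max(abs(e - t) for e, t in zip(est, true_channel))
print(max_err < 1e-9)  # → True: a linear channel is reconstructed exactly
```

This also illustrates the paper's conclusion: on near-flat channels the linear interpolator is already exact, while sharply frequency-selective channels need cubic or spline fits between pilots.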
    • Open Access Article

      95 - A Framework for Architecting Electronic Trust in E-Commerce: The Online Shopping Segment
      Amir Mohtarami Akbar Amini
      Today, e-commerce is rapidly expanding as a way of doing business in the modern world due to its advantages and benefits. The purpose of this study is to extract the dimensions and criteria for providing electronic trust arrangements in B2C services, improving the internal processes of the business environment, and determining the importance and priority of each criterion for ensuring electronic trust, in order to gain the trust and satisfaction of the customer. A mixed research method is employed, including a literature review, a field study, and opinion gathering alongside statistical techniques. The statistical population includes all expert customers of online stores in the city of Tehran, from which a random sample was drawn. Questions on electronic trust and the provision of e-business services, and their priority relative to each other, are examined through inferential statistics. The results of the data analysis show a significant relationship between the 12 identified criteria and customer trust. The results, framed by the conceptual framework, show the impact of the three dimensions (psychological, technical, and legal) according to the criteria and indicators of electronic trust.
    • Open Access Article

      96 - Improving IoT Resource Management Using Fog Computing and the Ant Lion Optimization Algorithm
      Payam Shams Seyedeh Leili Mirtaheri Reza Shahbazian Ehsan Arianyan
      In this paper, a model based on meta-heuristic algorithms for the optimal allocation of IoT resources using fog computing is proposed. In the proposed model, a user request is first given to the system as a workflow, and for each request the resource requirements (processing power, storage memory, and bandwidth) are extracted. A component then determines whether the requesting application is real-time. If the application is not real-time and is somewhat tolerant of latency, the request is referred to the cloud environment; but if the application needs a prompt response and is sensitive to latency, it is handled by fog computing and referred to one of the cloudlets. In this step, the ant lion optimization algorithm is used to select the best solution for allocating resources to serve the users of the IoT environment. The proposed method is simulated in MATLAB, and five indicators are used to evaluate its performance: fog cell energy consumption, response time, fog cell load imbalance, latency, and bandwidth. The results show that, compared to the baseline (ROUTER) design, the proposed method improves energy consumption, latency in fog cells, bandwidth consumption, load balance, and response time by 22, 18, 12, 22, and 47 percent, respectively.
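The ant lion optimizer at the heart of the abstract can be illustrated with a deliberately simplified sketch. This is not the canonical algorithm (which builds normalized cumulative random walks); here the walks are compressed into bounded perturbations whose radius shrinks over iterations, keeping only the roulette-wheel selection of antlions and elitism. A sphere function stands in for the paper's allocation cost; all parameters are assumptions.

```python
import numpy as np

def sphere(x):
    """Toy stand-in for the paper's allocation cost (energy/latency mix)."""
    return float(np.sum(x ** 2))

def antlion_opt(cost, dim=2, n=20, iters=200, lb=-5.0, ub=5.0, seed=1):
    """Highly simplified ant lion optimizer (sketch, not the canonical ALO)."""
    rng = np.random.default_rng(seed)
    antlions = rng.uniform(lb, ub, (n, dim))
    fit = np.array([cost(a) for a in antlions])
    elite_i = int(fit.argmin())
    elite, elite_fit = antlions[elite_i].copy(), float(fit[elite_i])
    for t in range(1, iters + 1):
        radius = (ub - lb) / (1 + 50 * t / iters)   # trap shrinks as iterations pass
        w = 1.0 / (fit - fit.min() + 1e-9)          # fitter antlions attract more ants
        probs = w / w.sum()
        for i in range(n):
            j = rng.choice(n, p=probs)              # roulette-wheel antlion selection
            walk_j = antlions[j] + rng.uniform(-radius, radius, dim)
            walk_e = elite + rng.uniform(-radius, radius, dim)
            ant = np.clip((walk_j + walk_e) / 2.0, lb, ub)
            f = cost(ant)
            if f < fit[i]:                          # antlion "catches" the fitter ant
                antlions[i], fit[i] = ant, f
        if fit.min() < elite_fit:                   # elitism
            elite_i = int(fit.argmin())
            elite, elite_fit = antlions[elite_i].copy(), float(fit[elite_i])
    return elite, elite_fit

best, best_fit = antlion_opt(sphere)
print(f"best cost: {best_fit:.4f}")
```

In the paper's setting, each candidate position would encode an assignment of requests to cloudlets and the cost would combine energy, latency, and bandwidth terms.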
    • Open Access Article

      97 - The effect of emotional intelligence of project managers on the effectiveness of team communication in Iranian research institutes (Case study: Research Institute of Communication and Information Technology)
      Mansoureh  Mohammadnezhad Fadard Ehram Safari
      Generally, in carrying out technical projects, especially in the field of information and communication technology, the most important criterion for handing over a project is technical capability, and less attention is paid to the communication skills of project managers, such as emotional intelligence. This lack of attention appears to reduce the effectiveness of team communication and can thus lead to project failure. The aim of this study was to measure the effect of the emotional intelligence of project managers on the effectiveness of team communication in the projects of the Research Institute of Communication and Information Technology. The method is descriptive-analytical of the correlational type, and the statistical population consists of project managers and members of the project teams of the Research Institute of Communication and Information Technology: 19 project teams, selected by the census method. The data collection tools are the Bar-On Emotional Intelligence Questionnaire and the Senior Questionnaire for evaluating the effectiveness of project team communication. Pearson's correlation coefficient, multivariate regression, dummy-variable regression, and the dependent t-test were used to analyze the data. The results show that the emotional intelligence of project managers affects effective communication in the project team. However, only intrapersonal skills, interpersonal skills, and adaptability can predict effective communication within the project team; the dimensions of general mood and stress management do not affect these relationships.
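The core statistic of this study, Pearson's correlation coefficient, can be computed in one line. The scores below are made-up placeholders standing in for the questionnaire data (Bar-On EQ totals vs. rated communication effectiveness), purely to show the computation.

```python
import numpy as np

# Hypothetical scores, NOT the study's data: EQ totals and communication ratings
eq_scores   = np.array([95, 110, 102, 130, 88, 121, 99, 115])
comm_scores = np.array([3.1, 3.9, 3.4, 4.6, 2.8, 4.2, 3.2, 4.0])

# Off-diagonal entry of the 2x2 correlation matrix is Pearson's r
r = np.corrcoef(eq_scores, comm_scores)[0, 1]
print(f"Pearson r = {r:.3f}")
```

A value of r near +1 would indicate the positive EQ-to-communication relationship the abstract reports; significance testing and the regression steps would follow separately.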
    • Open Access Article

      98 - Energy procurement of a cellular base station in independent microgrids with electric vehicles and renewable energy sources: Mixed-integer nonlinear programming model
      Reza Bahri saeed zeynali
      Cellular base stations are the communication devices that keep the world connected. Nevertheless, they are often installed in remote places. This paper studies the energy procurement of a cellular base station in an independent microgrid with a hydrogen-based energy storage system, a photovoltaic (PV) system, electric vehicles, and a diesel generator. A new mixed-integer nonlinear programming model is used to deal with the nonlinearities of the system components. The paper treats different uncertainties, such as the connection rate of the cellular base station, electric-vehicle driver behavior, and PV generation, using a stochastic programming method. The potency of the proposed method is studied in different case studies. The results prove that smart electric-vehicle chargers reduce the risks as well as the cost/emission objective functions. Using this model can reduce emissions by as much as 18.60%.
    • Open Access Article

      99 - BIG DATA
      Behshid Behkamal
      The main purpose of linked data is to realize the semantic web and extract knowledge by linking the data available on the web. One of the obstacles to achieving this goal is the presence of problems and errors in the published data, which causes incorrect links and, as a result, invalid conclusions. Since the quality of the data has a direct effect on the success of a linked data project and the realization of the semantic web, it is better to evaluate the quality of each data set in the early stages of publication. In this paper, a learning-based method for evaluating linked data sets is presented. For this purpose, a base quality model is first selected and its quality features are mapped to the field under study (here, the field of linked data). Then, based on this mapping, the important quality features in the study area are identified and described in detail by defining sub-features. In the third stage, based on past studies, measurement metrics for each sub-feature are extracted or defined. These metrics are then implemented according to the type of data in the studied domain. In the next step, several data sets are selected and the metric values are automatically calculated on them. To use supervised learning methods, the quality of the data must also be assessed empirically by experts; at this stage, the accuracy of each data set is evaluated by experts, and correlation tests are used to investigate the relationship between the quantitative values of the proposed metrics and the accuracy of the data. Learning methods are then used to identify the metrics that are effective in accuracy evaluation and have acceptable predictive power. Finally, a quality prediction model based on the proposed criteria is presented using learning methods. The evaluations showed that the proposed method is scalable, efficient, and applicable, in addition to being automatic.
    • Open Access Article

      100 - Providing a New Solution in Selecting Suitable Databases for Storing Big Data in the National Information Network
      Mohammad Reza Ahmadi Davood Maleki Ehsan Arianyan
      With the development of infrastructure and applications, especially public services in the form of cloud computing, traditional models of database services and their storage methods have faced severe limitations and challenges. The increasing growth of data-producing tools, and the need to store the results of large-scale processing generated by various activities in the national information network, by the private sector, and by pervasive social networks, have made migration to new databases with appropriate features inevitable. With the expansion and change in the size and composition of data and the formation of big data, traditional practices and patterns no longer meet the new needs. It is therefore necessary to use data storage systems with new, scalable formats and models. This paper reviews the essential solutions regarding the structural dimensions and different functions of traditional databases and modern storage systems, and the technical solutions for migrating from traditional databases to modern ones suitable for big data. The basic considerations for connecting traditional and modern databases for storing and processing data obtained from the national information network are also presented, and the parameters and capabilities of databases in the standard platform context and the Hadoop context are examined. As a practical example, a combination of traditional and modern databases is presented, evaluated, and compared using the balanced scorecard method.
    • Open Access Article

      101 - Community-Based Multi-Criteria Placement of Applications in the Fog Environment
      Masomeh Azimzadeh Ali Rezaee Somayyeh  Jafarali Jassbi MohammadMahdi Esnaashari
      Fog computing technology has emerged to meet the need of modern IoT applications for low latency, high security, etc. On the other hand, the limitations of fog computing, such as heterogeneity, distribution, and resource constraints, make service management in this environment challenging. Intelligent service placement means placing application services on fog nodes so as to ensure their QoS and the effective use of resources. Using communities to organize nodes for service placement is one approach in this area, where communities are mainly created based on the connection density of nodes and applications are placed based on a single-criterion prioritization approach. This leads to unbalanced communities and inefficient placement of applications. This paper presents a priority-based method for deploying applications in the fog environment. To this end, balanced communities are created and applications are placed in them based on a multi-criteria prioritization approach. This leads to optimal use of network capacities and increased QoS. The simulation results show that the proposed method improves deadline satisfaction by up to 22%, increases availability by about 12%, and increases resource utilization by up to 10%.
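Multi-criteria prioritization of candidate fog nodes can be sketched as a weighted score over normalized criteria. The node names, criteria, and weights below are assumptions for illustration, not the paper's exact formulation: the service goes to the highest-scoring node.

```python
# Hypothetical candidate fog nodes with normalized capacity fractions
nodes = {
    "fog-1": {"latency_ms": 5,  "free_cpu": 0.2, "free_mem": 0.5},
    "fog-2": {"latency_ms": 12, "free_cpu": 0.8, "free_mem": 0.7},
    "fog-3": {"latency_ms": 30, "free_cpu": 0.9, "free_mem": 0.9},
}
weights = {"latency_ms": 0.5, "free_cpu": 0.3, "free_mem": 0.2}  # assumed weights

def score(attrs):
    # Lower latency is better, so map it into [0, 1] with 100 ms as the cap;
    # CPU/memory headroom are already fractions in [0, 1].
    lat = 1.0 - min(attrs["latency_ms"], 100) / 100
    return (weights["latency_ms"] * lat
            + weights["free_cpu"] * attrs["free_cpu"]
            + weights["free_mem"] * attrs["free_mem"])

best = max(nodes, key=lambda n: score(nodes[n]))
print(best)  # → fog-2 (balances low latency against free capacity)
```

A single-criterion policy (latency only) would pick fog-1 here; the weighted multi-criteria score instead trades latency off against headroom, which is the imbalance the paper targets.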
    • Open Access Article

      102 - Identifying and Ranking Factors Affecting the Digital Transformation Strategy in Iran's Road Freight Transportation Industry Focusing on the Internet of Things and Data Analytics
      Mehran Ehteshami Mohammad Hasan Cheraghali Bita Tabrizian Maryam Teimourian sefidehkhan
      This research was carried out with the aim of identifying and ranking the factors affecting the digital transformation strategy in Iran's road freight transportation industry, focusing on the Internet of Things and data analytics. After reviewing the literature, semi-structured interviews were conducted with 20 academic and industry experts in Iran's road freight transportation sector, selected using purposive sampling and the saturation principle. In the quantitative part, the opinions of 170 employees of this industry, selected based on Cochran's formula and stratified sampling, were collected using a researcher-made questionnaire. The Delphi technique, literature review, and coding were used to analyze the data in the qualitative part; in the quantitative part, inferential statistics with SPSS and SmartPLS were used. Finally, 40 indicators were extracted in the form of 8 factors, and the indicators and factors were ranked using factor analysis. The results show that internal factors rank highest, with software infrastructure, hardware infrastructure, economic factors, external factors, and legal, cultural, and penetration factors in the next ranks, respectively. It is therefore suggested that organizations align their human resource empowerment programs with the use of technology and digital tools.
    • Open Access Article

      103 - A Framework for Sentiment Analysis in Social Networks based on Interpreting Contents
      Maryam Tayfeh-Mahmoudi Amirmansour Yadegari Parvin Ahmadi Kambiz Badie
      Interpreting contents with the aim of analyzing the sentiment of their narrators in social networks is highly significant, due to the role a content plays in disseminating information to the corresponding human groups. In this paper, we propose a framework for sentiment analysis of complex contents in a social network, according to which a set of if-then rules defined at a high abstraction level can classify the messages behind these contents. In this framework, items such as prosody, context, and key propositions are considered in the condition part of a rule, and the possible message classes are taken into account in its action part. Notably, the rules proposed for interpreting a content do not depend on the language considered, due to the inherent nature of the items used in interpretation. Results of experiments on a wide range of different contents in a social network support the claim that the proposed framework is sufficiently capable of analyzing the sentiments of contents' narrators.
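The if-then rule scheme can be sketched minimally. The rules, cue names, and message classes below are hypothetical stand-ins: each rule's condition checks a set of prosodic/contextual cues and key propositions, and its action assigns a message class, independent of the content's language.

```python
# Hypothetical rule base: condition = set of abstract cues, action = message class
rules = [
    {"cues": {"exclamation", "praise"},  "label": "enthusiastic"},
    {"cues": {"negation", "complaint"},  "label": "dissatisfied"},
    {"cues": {"question"},               "label": "inquiring"},
]

def classify(cues, default="neutral"):
    # Assumed conflict-resolution strategy: the first rule whose entire
    # cue set is present in the content fires.
    for rule in rules:
        if rule["cues"] <= cues:
            return rule["label"]
    return default

print(classify({"exclamation", "praise", "emoji"}))  # → enthusiastic
print(classify({"question"}))                        # → inquiring
```

Because the conditions operate on abstract cues rather than words, the same rule base applies regardless of the source language, which mirrors the language-independence claim in the abstract.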
    • Open Access Article

      104 - Anomaly and Intrusion Detection Through Data Mining and Feature Selection using PSO Algorithm
      Fereidoon Rezaei Mohamad Ali Afshar Kazemi Mohammad Ali Keramati
      Today, considering technological development, the increased use of the Internet in business, and the movement of businesses from physical to virtual and online forms, attacks and anomalies have also changed from physical to virtual: instead of robbing a store or market, individuals intrude into websites and virtual markets through cyberattacks and disrupt them. Detection of attacks and anomalies is one of the new challenges in promoting e-commerce technologies. Detecting network anomalies and destructive activities in e-commerce can be done by analyzing the behavior of network traffic. Data mining techniques are used extensively in intrusion detection systems (IDS) to detect anomalies. Reducing the dimensionality of features plays an important role in intrusion detection, since detecting anomalies in high-dimensional network traffic features is a time-consuming process. Choosing suitable, accurate features speeds up the analysis, resulting in improved detection speed. In this article, by using data mining algorithms such as Bayesian classifiers, the multilayer perceptron, CFS, Best First, J48, and PSO, we were able to increase the accuracy of detecting anomalies and attacks to 0.996 and reduce the error rate to 0.004.
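The PSO-driven feature selection step can be illustrated with a minimal binary PSO. This is a sketch under stated assumptions: particles are 0/1 feature masks, velocities pass through a sigmoid to give flip probabilities, and a toy fitness (agreement with a hidden "useful feature" mask) stands in for the classifier accuracy (Bayes/MLP/J48) the paper would actually use to score subsets.

```python
import numpy as np

rng = np.random.default_rng(7)
n_feat, n_particles, iters = 12, 10, 40
useful = rng.random(n_feat) < 0.5          # hidden toy ground truth, illustrative only

def fitness(mask):
    """Toy stand-in for classifier accuracy on the selected feature subset."""
    return float(np.mean((mask > 0.5) == useful))

pos = (rng.random((n_particles, n_feat)) < 0.5).astype(float)   # binary positions
vel = rng.normal(0.0, 1.0, (n_particles, n_feat))
pbest = pos.copy()
pbest_fit = np.array([fitness(p) for p in pos])
g = int(pbest_fit.argmax())
gbest, gbest_fit = pbest[g].copy(), float(pbest_fit[g])
init_fit = gbest_fit

for _ in range(iters):
    r1, r2 = rng.random(vel.shape), rng.random(vel.shape)
    # Standard PSO velocity update with inertia 0.7 and cognitive/social weights 1.5
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    # Binary PSO: sigmoid of velocity gives the probability a bit is set
    pos = (rng.random(vel.shape) < 1.0 / (1.0 + np.exp(-vel))).astype(float)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved] = pos[improved]
    pbest_fit[improved] = fit[improved]
    if pbest_fit.max() > gbest_fit:
        g = int(pbest_fit.argmax())
        gbest, gbest_fit = pbest[g].copy(), float(pbest_fit[g])

print(f"selected {int(gbest.sum())} features, score {gbest_fit:.3f}")
```

In the real pipeline, `fitness` would train and cross-validate a classifier on the masked feature set, so each evaluation is expensive; that cost is exactly why dimensionality reduction pays off in detection speed.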
    • Open Access Article

      105 - Intrusion Detection Based on Cooperation on the Permissioned Blockchain Platform in the Internet of Things Using Machine Learning
      Mohammad Mahdi  Abdian majid ghayori Seyed Ahmad  Eftekhari
      Intrusion detection systems seek to realize several objectives, such as increasing the true detection rate, reducing the detection time, reducing the computational load, and preserving the resulting logs in such a way that they cannot be manipulated or deleted by unauthorized people. This study seeks to address these challenges by exploiting the advantages and durability of blockchain technology, relying on an IDS architecture based on multi-node cooperation. The proposed model is an intrusion detection engine based on the decision tree algorithm, implemented in the nodes of the architecture. The architecture consists of several nodes connected on a permissioned blockchain platform; the resulting model and logs are stored on the blockchain and cannot be manipulated. In addition to these benefits, the occupied memory is reduced and the speed and time of transactions are improved. In this research, several evaluation models are designed for single-node and multi-node architectures on the blockchain platform. Finally, a proof of the architecture, possible threats to it, and defensive measures are explained. The most important advantages of the proposed scheme are the elimination of the single point of failure, the maintenance of trust between nodes, and the assurance of the integrity of the model and the discovered logs.
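The tamper-evidence property that motivates storing IDS logs on a blockchain can be shown with a bare hash chain. This is a simplified single-node sketch, not the paper's multi-node permissioned platform: each block stores a detection record plus the hash of the previous block, so editing any earlier record invalidates every later hash. The log entries are hypothetical.

```python
import hashlib
import json

def block_hash(record, prev):
    """SHA-256 over a canonical JSON encoding of the block contents."""
    payload = json.dumps({"record": record, "prev": prev}, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain, record):
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "prev": prev, "hash": block_hash(record, prev)})

def verify(chain):
    prev = "0" * 64
    for b in chain:
        if b["prev"] != prev or b["hash"] != block_hash(b["record"], b["prev"]):
            return False
        prev = b["hash"]
    return True

chain = []
append(chain, {"src": "10.0.0.5", "verdict": "intrusion"})   # hypothetical detections
append(chain, {"src": "10.0.0.9", "verdict": "benign"})
ok_before = verify(chain)
chain[0]["record"]["verdict"] = "benign"                      # attacker rewrites a log
ok_after = verify(chain)
print(ok_before, ok_after)  # True False
```

A permissioned blockchain adds distributed consensus on top of this chaining, so no single compromised node can rewrite history even if it recomputes its local hashes.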
    • Open Access Article

      106 - Improving Resource Allocation in Mobile Edge Computing Using Particle Swarm and Gray Wolf Optimization Algorithms
      Seyed Ebrahim Dashti Saeid Shabooei
      Mobile edge computing improves the experience of end users in obtaining appropriate services and service quality. In this paper, the problem of improving resource allocation when offloading tasks from mobile devices to edge servers in computing systems is investigated. Some tasks are processed locally and some are offloaded to edge servers. The main issue is to schedule the offloaded tasks on the virtual machines of the computing network so as to minimize computing time, service cost, network waste, and the maximum connection of a task with the network. In this paper, a multi-objective hybrid algorithm combining particle swarm optimization and the grey wolf optimizer is introduced to manage resource allocation and task scheduling and achieve an optimal result in edge computing networks. The local search of the particle swarm algorithm performs well on this problem but can lose the global optimum; therefore, the grey wolf algorithm, whose global search can reach the optimal solution, was used as the main basis of the proposed algorithm. By combining these mechanisms, we tried to improve the behavior of both algorithms with respect to the objectives of the problem. To create the network in this research, the network creation parameters of the baseline article were used, and the LCG data set was used in the simulation, which was carried out in the CloudSim environment. The comparison results show improved waiting time and cost in the proposed approach: on average, the proposed model performed better, reducing the completion time by 10% and increasing resource utilization by 16%.
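The grey wolf component of the hybrid can be sketched on its own. This is a minimal GWO under stated assumptions: the PSO hybridization is omitted, a sphere function stands in for the scheduling objective, and all parameters are illustrative. Positions are pulled toward the three best wolves (alpha, beta, delta) while the exploration factor decays.

```python
import numpy as np

def cost(x):
    """Toy stand-in for the paper's time/cost scheduling objective."""
    return float(np.sum(x ** 2))

def gwo(n=30, dim=4, iters=100, lb=-5.0, ub=5.0, seed=3):
    """Minimal grey wolf optimizer (sketch; PSO hybrid step omitted)."""
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(lb, ub, (n, dim))
    for t in range(iters):
        fit = np.array([cost(w) for w in wolves])
        order = fit.argsort()
        alpha, beta, delta = wolves[order[:3]]     # three best wolves lead the pack
        a = 2 - 2 * t / iters                      # exploration factor decays 2 -> 0
        for i in range(n):
            x = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                x += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(x / 3.0, lb, ub)   # average of the three pulls
    fit = np.array([cost(w) for w in wolves])
    return wolves[fit.argmin()], float(fit.min())

best, best_fit = gwo()
print(f"best cost: {best_fit:.5f}")
```

In the hybrid described by the abstract, a PSO-style velocity term would be blended into the position update so that local refinement and the GWO's global pull operate together.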
    • Open Access Article

      107 - A Survey on Computer Security Patterns and Proposing a New Perspective
      Hadi Sadjadi Reza Kalantari
      In this article, the use of computer security patterns and their benefits are first discussed from a new angle. Then, while briefly introducing the space of computer security encounters in the form of an ontology, three perspectives on the study of patterns in this field are identified and distinguished from each other: secure models, security models, and frameworks and systems for security models. The first and last perspectives are explained briefly, and the second is studied in detail from the viewpoint of the organization of patterns, covering five types of organization: organization based on the software lifecycle, organization based on logical levels, organization based on threat classification, organization based on attack classification, and organization based on application. Through this introduction of patterns, the audience acquires a comprehensive view of the discourse of computer security patterns and the knowledge needed to make better use of them. Finally, the analysis and idea of this research are presented in the form of a new type of organization intended to facilitate the proper use and addressing of patterns. It is argued that the existing categorizations are mostly static and forward-looking and lack the necessary dynamism and retrospection, whereas the proposed idea of covering all stakeholders and the security ontology can have these features and include agile patterns as well. Based on this idea and the related analyses, directions for future research are laid out for the audience.
    • Open Access Article

      108 - Identifying the Key Drivers of Digital Signature Implementation in Iran (Using Fuzzy Delphi Method)
      Ghorbanali Mehrabani Fatemeh Zargaran khouzani
      Despite the emphasis of researchers and experts on the need to implement digital signatures, and the progress of technology towards the digitization of all affairs and electronic governance, Iran still faces challenges in implementing digital signatures. The purpose of this article is to identify and analyze the key drivers of digital signature implementation in Iran with a fuzzy Delphi approach. The research is applied in purpose and uses a hybrid approach to data gathering. The statistical community consists of all experts and specialists in the field of information technology and digital signatures, together with the articles in this field. The expert sample consists of 13 people, selected by purposive sampling; 31 articles were selected based on their availability, downloadability, and relevance to the topic. Data analysis followed the fuzzy Delphi approach. Validity and reliability were calculated and confirmed using the CVR index and Cohen's kappa test, with coefficients of 0.83 and 0.93, respectively. The results show that the key drivers of digital signature implementation in Iran comprise 5 main dimensions and 30 concepts: 1) security (information confidentiality, information security, sender authentication, document authentication, privacy protection, trust between parties); 2) business (digital business models, communication needs, staff management, organization size, organizational structure, organizational resources, organizational culture, top managers, the competition ecosystem, e-governance); 3) user (perceived convenience, perceived benefit, consumer behavior, consumer literacy, consumer lifestyle); 4) technical (development of technical infrastructure, systems integration, system complexity, system tanks, design quality, speed of certificate production and verification, impermeability to hackers); and 5) legal (legal licenses, penal laws, the legislative body, e-commerce laws). It is suggested that special attention be paid to rewriting rules, training users, and creating a security culture, and that digital signature policymakers invite knowledge-based companies to cooperate in developing infrastructure and making the relevant software competitive.
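The fuzzy Delphi screening step can be sketched with triangular fuzzy numbers. The expert ratings, driver names, and acceptance threshold below are hypothetical: each expert rates a candidate driver as a triple (l, m, u), the triples are averaged component-wise, defuzzified by a simple centroid, and compared to a cut-off.

```python
# Hypothetical expert ratings as triangular fuzzy numbers (l, m, u) in [0, 1]
ratings = {
    "information security": [(0.7, 0.9, 1.0), (0.5, 0.7, 0.9), (0.7, 0.9, 1.0)],
    "organization size":    [(0.1, 0.3, 0.5), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5)],
}
THRESHOLD = 0.7   # assumed acceptance cut-off

def defuzzify(fuzzy_list):
    # Component-wise mean across experts, then a simple centroid defuzzification
    l = sum(f[0] for f in fuzzy_list) / len(fuzzy_list)
    m = sum(f[1] for f in fuzzy_list) / len(fuzzy_list)
    u = sum(f[2] for f in fuzzy_list) / len(fuzzy_list)
    return (l + m + u) / 3

accepted = {k: round(defuzzify(v), 3) for k, v in ratings.items()
            if defuzzify(v) >= THRESHOLD}
print(accepted)
```

Drivers whose defuzzified consensus clears the threshold survive into the final dimension/concept list; the rest are dropped or sent back for another Delphi round.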
    • Open Access Article

      109 - Community Detection in Bipartite Networks Using HellRank Centrality Measure
      Ali Khosrozadeh Movaghar Movaghar Mohammad Mehdi Gilanian Sadeghi Hamidreza Mahyar
      Community structure is a common and important feature of many complex networks, including bipartite networks. In recent years, community detection has received attention in many fields and many methods have been proposed for it, but the high time cost of some methods limits their use in large-scale networks. Methods with lower time complexity exist, but they are mostly non-deterministic, which greatly reduces their applicability in the real world. The usual approach to community detection in bipartite networks is to first construct a unipartite projection of the network and then detect communities in that projection using methods for unipartite networks, but such projections inherently lose information. In this paper, based on the bipartite modularity measure, which quantifies the strength of partitions in bipartite networks, and using the HellRank centrality measure, a fast and deterministic method is proposed for detecting communities directly from bipartite networks, without the need for projection. The proposed method is inspired by, and simulates, the voting process in elections in human society.
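The bipartite modularity the method builds on can be computed directly. This sketch implements Barber's bipartite modularity for a given partition (the HellRank-based community construction itself is omitted); the example graph is two disjoint complete bipartite blocks, an ideal two-community split.

```python
import numpy as np

def barber_modularity(A, c_rows, c_cols):
    """Barber's bipartite modularity:
    Q_B = (1/m) * sum over same-community (i, j) of (A_ij - k_i * d_j / m),
    where i ranges over one part and j over the other."""
    m = A.sum()
    k = A.sum(axis=1)              # degrees of the row part
    d = A.sum(axis=0)              # degrees of the column part
    Q = 0.0
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            if c_rows[i] == c_cols[j]:
                Q += A[i, j] - k[i] * d[j] / m
    return Q / m

# Two disjoint complete bipartite blocks (4 row nodes x 4 column nodes)
A = np.zeros((4, 4))
A[:2, :2] = 1                      # community 0
A[2:, 2:] = 1                      # community 1
Q = barber_modularity(A, [0, 0, 1, 1], [0, 0, 1, 1])
print(Q)  # → 0.5
```

Note that no unipartite projection is needed: the measure scores a partition of both parts jointly, which is what lets a direct method avoid the information loss the abstract criticizes.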
    • Open Access Article

      110 - Generalizing The Concept of Business Processes Structural Soundness from Classic Petri-nets to BPMN2.0 Process Models
      Yahya Poursoltani Mohammad Hassan Shirali-Shahreza S. Alireza hashemi G.
      The BPMN2.0 standard is a modeling language that can be understood and used by a wide range of users. However, because of its non-formal nature, models designed with it can contain structural errors such as deadlock (the impossibility of executing some process tasks) and livelock (the infinite repetition of tasks). These semantic errors can create anomalies in the workflow of the organization. So far, some research has been conducted on the validation of these process models, and various solutions have been provided to discover some of these structural errors. The question that arises about these methods is whether any of them can definitively guarantee the structural correctness of a BPMN process model. To answer this question, we need a comprehensive definition of a correct BPMN2.0 process model, against which we can evaluate the comprehensiveness of validation methods and be sure that a given method can discover all structural errors of a process model. In this paper, based on the concept of general process models and the concept of soundness (defined for process models created with Petri nets), and by generalizing its properties, i.e. liveness and boundedness, to BPMN2.0 process models, a comprehensive definition of a correct (sound) BPMN2.0 process model is provided. Then, the comprehensiveness of the methods suggested by some of the most important existing research is evaluated against it. This definition can be used as a measure of the efficiency of BPMN validation methods.
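A small, necessary (but not sufficient) condition related to soundness can be checked on the flow graph alone: every node must lie on some path from the start event to the end event. This is a simplified sketch, not the full liveness/boundedness analysis; the process graph below is hypothetical, with task "d" looping on itself so it can never reach the end.

```python
from collections import deque

def reachable(adj, start):
    """Set of nodes reachable from start by BFS."""
    seen, q = {start}, deque([start])
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                q.append(v)
    return seen

def on_path_start_to_end(adj, start, end):
    """Necessary soundness condition: every node is both reachable from
    the start event and co-reachable to the end event."""
    fwd = reachable(adj, start)
    rev = {}
    for u, vs in adj.items():           # build the reversed graph
        for v in vs:
            rev.setdefault(v, []).append(u)
    bwd = reachable(rev, end)
    nodes = set(adj) | {v for vs in adj.values() for v in vs}
    return nodes <= (fwd & bwd)

# Hypothetical flow: task "d" loops forever and never reaches the end event
flow = {"start": ["a"], "a": ["b", "d"], "b": ["end"], "d": ["d"]}
print(on_path_start_to_end(flow, "start", "end"))  # → False
```

A full soundness check additionally needs token semantics (gateway splits/joins), which is exactly where the Petri-net notions of liveness and boundedness come in.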
    • Open Access Article

      110 - Presenting the ICT Policies Implementation Model of the 6th Development Plan Using the Neural Network Method
      Nazila Mohammadi Gholamreza Memarzadeh Sedigheh Tootian
      Properly managed, planned implementation of information and communication technology policies is essential for improving the country's position in science and technology. The purpose of this research is to provide a model of the factors affecting the implementation of Iran's ICT policies, using the neural network technique and based on Giddens' structuration theory. In terms of method the research is a survey, and in terms of purpose it is applied, since its results are intended for use in the Ministry of Communication and Information Technology and the Telecommunication Company of Iran. Data were collected through library and field methods, using a researcher-made questionnaire. The statistical population consists of ICT experts at the headquarters of the Telecommunication Company of Iran (810 people), of whom 260 were randomly selected as a sample based on Cochran's formula. MATLAB was used for data analysis. According to the findings, the best combination for development occurs when all input variables are considered simultaneously, and the worst when the infrastructure-development variable is ignored; based on network sensitivity analysis, infrastructure development is the most important factor and content supply the least important.
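The sensitivity analysis described above, ablating one input variable at a time and observing the change in the model's output, can be sketched as follows. The variable names and weights are invented for illustration and merely mirror the reported ranking; they are not taken from the study's trained network.

```python
# Toy input-sensitivity analysis: zero out one input at a time and
# measure how much the model's output drops. The weights are made up.
weights = {
    "infrastructure_development": 0.45,
    "policy_coordination":        0.30,
    "human_capital":              0.15,
    "content_supply":             0.10,
}

def model_output(inputs):
    """Stand-in for the trained network: a weighted sum of the inputs."""
    return sum(weights[k] * v for k, v in inputs.items())

baseline = {k: 1.0 for k in weights}      # all variables present
full = model_output(baseline)

sensitivity = {}
for k in weights:
    ablated = dict(baseline, **{k: 0.0})  # drop one variable
    sensitivity[k] = full - model_output(ablated)

ranked = sorted(sensitivity, key=sensitivity.get, reverse=True)
print(ranked[0])   # most influential input
print(ranked[-1])  # least influential input
```

With a real MLP the same loop applies: ablate an input, re-evaluate the network, and rank inputs by the resulting output change.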
    • Open Access Article

      112 - Regression Test Time Reduction in Test Driven Development
      Zohreh Mafi Mirian Mirian
      Test-Driven Development (TDD) is a test-first software development method in which the production of each piece of code begins with writing a test case. The method is valued for many advantages, including readable, well-organized, and concise code, as well as increased quality, productivity, and reliability. The large number of unit tests it produces is an advantage (it increases the code's reliability), but repeatedly executing all of them lengthens the regression test. The purpose of this article is to present a test-case selection algorithm that reduces regression test time in TDD. Various test-selection ideas have been proposed, most of them tied to a particular programming language or development method. The idea presented here is based on program differencing and on the nature of TDD itself, and is implemented as a plugin for Eclipse in Java. The tool consists of five main components: 1) version management, 2) code segmentation, 3) detection of code changes in each version relative to the previous one, 4) creation of semantic links between unit tests and code blocks, and 5) test-case selection.
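The selection step (component 5) can be sketched as an intersection between the code blocks changed in the new version (component 3) and the blocks each unit test is linked to (component 4). This is an illustrative sketch, not the plugin's implementation; the coverage map, block names, and test names are hypothetical.

```python
def select_tests(test_to_blocks, changed_blocks):
    """Select only the unit tests linked to at least one changed block."""
    changed = set(changed_blocks)
    return sorted(t for t, blocks in test_to_blocks.items()
                  if changed & set(blocks))

# Hypothetical semantic links: which code blocks each unit test exercises.
coverage = {
    "test_login":    {"auth.validate", "auth.session"},
    "test_checkout": {"cart.total", "payment.charge"},
    "test_profile":  {"auth.session", "user.render"},
}

# Suppose the diff between versions touched only `auth.session`:
# only the two tests linked to that block need to run again.
print(select_tests(coverage, {"auth.session"}))  # ['test_login', 'test_profile']
```

The regression suite then shrinks from all tests to only those whose linked blocks changed, which is where the time saving comes from.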
    • Open Access Article

      113 - Survey on the Applications of the Graph Theory in the Information Retrieval
      Maryam Piroozmand Amir Hosein Keyhanipour Ali Moeini
      Owing to its power in modeling complex relations between entities, graph theory has been widely used for real-world problems; information retrieval, meanwhile, has emerged as one of the major problems in the area of algorithms and computation. Since graph-based information retrieval algorithms have proven efficient and effective, this paper provides an analytical review of these algorithms and proposes a categorization of them into three major classes: the first includes algorithms that use a graph representation of the underlying dataset within the retrieval process; the second contains semantic retrieval algorithms that exploit graph theory; and the third concerns applications of graph theory to the learning-to-rank problem. The set of reviewed works is analyzed by both frequency and publication time. An interesting finding of the review is that the third category is a relatively hot research topic in which only a limited number of recent works have been conducted.
    • Open Access Article

      114 - Application Identification Through Intelligent Traffic Classification
      Shaghayegh Naderi
      Traffic classification and analysis is one of the major challenges in data mining and machine learning, and it plays an important role in network security, quality assurance, and management. Today, much of the traffic transmitted over networks is encrypted by secure communication protocols such as HTTPS. Encrypted traffic, the price of increased user security and privacy, reduces the possibility of monitoring and detecting suspicious and malicious traffic in communication infrastructures, and classifying it without decrypting network communications is difficult: the payload information is lost, and only header information is accessible (which is itself encrypted in newer protocol versions such as TLS 1.3). The older approaches to traffic analysis, such as port-based and payload-based methods, have therefore lost their effectiveness, and new approaches based on artificial intelligence and machine learning are used for encrypted traffic analysis. In this article, after reviewing traffic analysis methods, an operational architectural framework for intelligent traffic analysis and classification is designed. An intelligent model for traffic classification and application identification is then presented and evaluated with machine learning methods on the Kaggle141 dataset. The results show that the random forest model, in addition to being far more interpretable than deep learning methods, achieves higher accuracy in traffic classification (95% and 97%) than the other machine learning methods. Finally, tips and suggestions on using machine learning methods in operational traffic classification are provided.
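The classification setup can be sketched with a random forest over simple per-flow features. The two-class data below are synthetic stand-ins for the Kaggle141 dataset, and the chosen features (average packet size, inter-arrival time) are assumptions for illustration, not the study's feature set.

```python
# Minimal sketch of flow-based traffic classification with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def make_flows(n, mean_pkt, mean_iat, label):
    """Synthesize n flows as (avg packet size, inter-arrival time) pairs."""
    X = np.column_stack([
        rng.normal(mean_pkt, 50, n),  # average packet size (bytes)
        rng.normal(mean_iat, 5, n),   # inter-arrival time (ms)
    ])
    return X, np.full(n, label)

# Two toy application classes: bulk transfer vs. interactive traffic.
X0, y0 = make_flows(200, 1200, 10, 0)
X1, y1 = make_flows(200, 200, 40, 1)
X, y = np.vstack([X0, X1]), np.concatenate([y0, y1])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
acc = clf.score(X, y)
print(round(acc, 2))
```

In practice the features would come from header-level flow statistics (the only information available for encrypted traffic), and evaluation would use a held-out test split rather than the training data.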