
List of articles (by subject) Information and Knowledge Technology


    • Open Access Article

      1 - Document Clustering Based On Ontology and Fuzzy Approach
      Maryam Amiri hasan khatan Lo
      Data mining, also known as knowledge discovery in databases, is the process of discovering unknown knowledge from large amounts of data. Text mining applies data mining techniques to extract knowledge from unstructured text. Text clustering, one of the most important text mining techniques, is the unsupervised classification of similar documents into groups. The most important steps in document clustering are how documents are represented and how similarities between them are measured. This research focuses on improving the performance of text clustering by introducing a new ontological representation and a new similarity measure. The text clustering algorithm is investigated in three aspects: the ontological representation of documents, the document similarity measure, and a fuzzy inference system for measuring the final similarities; the clustering itself is carried out by bottom-up hierarchical clustering. In the first step, documents are represented as ontological graphs according to domain knowledge. In contrast to keyword-based methods, this method is based on domain concepts and represents a document as a subgraph of the domain ontology. The concepts extracted from a document form the graph nodes, and each node is weighted by concept frequency. The relations between a document's concepts define the graph edges, and the strength of each relation determines the edge weight. In the second step, a new similarity measure suited to this ontological representation is presented. For each document, the main concepts, detailed concepts, and main edges are determined, and the similarity of each pair of documents is computed as three values corresponding to these three factors. In the third step, a fuzzy inference system with three inputs and one output is designed: the inputs are the similarities of the main concepts, detailed concepts, and main edges of two documents, and the output is the final similarity of the two documents.
      In the final step, a bottom-up hierarchical clustering algorithm clusters the documents according to the final similarity matrix. For evaluation, the proposed method is compared with the Naïve Bayes method and ontology-based algorithms. The results indicate that the proposed method improves precision, recall, F-measure, and accuracy, and produces more meaningful results.
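The pipeline above ends with bottom-up hierarchical clustering over a final document-similarity matrix. A minimal sketch of that last step, with a hypothetical 4-document similarity matrix standing in for the fuzzy-inference output:

```python
# Minimal bottom-up (agglomerative) clustering over a precomputed
# document-similarity matrix. The similarity values below are made up;
# in the paper they come from the fuzzy inference system.

def agglomerative(sim, k):
    """Merge the two most similar clusters until k clusters remain.
    sim: symmetric matrix of pairwise document similarities in [0, 1].
    Cluster-to-cluster similarity is the average over member pairs."""
    clusters = [[i] for i in range(len(sim))]
    while len(clusters) > k:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                s = sum(sim[i][j] for i in clusters[a] for j in clusters[b])
                s /= len(clusters[a]) * len(clusters[b])
                if best is None or s > best[0]:
                    best = (s, a, b)
        _, a, b = best
        clusters[a] = clusters[a] + clusters[b]  # merge b into a
        del clusters[b]
    return clusters

# Hypothetical final-similarity matrix for 4 documents: docs 0 and 1
# are close to each other, as are docs 2 and 3.
sim = [
    [1.0, 0.9, 0.2, 0.1],
    [0.9, 1.0, 0.3, 0.2],
    [0.2, 0.3, 1.0, 0.8],
    [0.1, 0.2, 0.8, 1.0],
]
clusters = agglomerative(sim, k=2)
```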
    • Open Access Article

      2 - An Investigation into the Enabling Factors of Knowledge Management in the Kermanshah Province Telecommunication Company
      hussein moomivand
      The factors required to apply knowledge management are investigated in this research through a field study of staff opinions at the Kermanshah Telecommunication Company. After identifying the required factors, the relationship of each factor (knowledge process system, organizational culture, and information technology system) with knowledge management was analyzed statistically using SPSS. The company has a population of 370 staff, from which a sample of 77 staff was drawn. Acceptable correlations were found between the knowledge process system, organizational culture, and information technology system and knowledge management, with correlation coefficients (R) of 0.854, 0.915, and 0.812 respectively. The factors differ in how strongly they affect the application of knowledge management: the knowledge process system, information technology system, and organizational culture rank first, second, and third respectively.
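The correlations reported above are Pearson coefficients (computed in SPSS in the study). For illustration, the same statistic can be computed directly; the survey scores below are hypothetical:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equally long samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical Likert-scale scores: organizational-culture rating vs.
# knowledge-management rating for five respondents.
culture = [3, 4, 5, 2, 4]
km = [3, 5, 5, 2, 4]
r = pearson_r(culture, km)  # strong positive correlation
```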
    • Open Access Article

      3 - Designing the First Database of Handwritten Kurdish Words for Optical Word-Recognition Systems
      fatemeh daneshfar basir alagheband vahid sharafi
      Abstract: One of the foundational components of optical word-recognition systems is the database. Any system designed in this field must necessarily use some kind of database. Since the subject of study in these systems is the written form of different languages, a specific database is clearly needed for each particular language. The language this article focuses on is Kurdish, and the article describes the various stages of designing the first handwritten-word database for the Kurdish language. Since no database for optical word recognition has ever been designed for Kurdish, this is a fresh and promising area for research. Furthermore, given that Kurdish has two different scripts, Latin and Aramaic, this article deals exclusively with the Aramaic script, specifically in its handwritten form.
    • Open Access Article

      4 - A Model to Improve Banking Services Using Customer Knowledge Management and Mobile Banking, and Its Impact on Customer Loyalty
      narges rezaei malek
      With the progress of communication technology over the past two decades, the use of ICT has become a success factor in the world of competition. To increase competitiveness in the Iranian banking industry, many banks are willing to offer unique services to their customers. Private banks have concentrated their investment on presenting special services, which has made them more successful in attracting and retaining customers. To evaluate the factors influencing the improvement of banking services, information technology and knowledge management were identified as key factors. This paper identifies the most effective indicators for assessing the impact of knowledge management and information technology on improving a bank's customer service and, in turn, on customer satisfaction. A population of 180 experts of Bank Mellat was selected; a random sampling method yielded a sample size of 130, of which 120 questionnaires were returned. Data were collected through questionnaires and analyzed by structural equation modeling using LISREL software. To verify the accuracy of the results, a regression-based sensitivity analysis was also applied. The results show that customer knowledge management and mobile banking have a positive effect on the performance of banking services, and that improving these services has a positive effect on customer satisfaction.
    • Open Access Article

      5 - A Method for Clustering Customers Using the RFM Model and Grey Numbers under Uncertainty
      azime mozafari
      The purpose of this study is to present a method for clustering bank customers based on the RFM model under uncertainty. In the proposed framework, after determining the values of the RFM parameters, namely recency of exchange (R), frequency of exchange (F), and monetary value of the exchange (M), grey theory is used to handle the uncertainty, and customers are segmented using a different approach. Bank customers are thus clustered into three main segments: good, ordinary, and bad customers. After cluster validation using the Dunn index and the Davies-Bouldin index, the characteristics of the customers in each segment are identified. Finally, recommendations are offered to improve the customer relationship management system.
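A minimal sketch of plain RFM scoring into the three segments named above (the paper's grey-number treatment of uncertainty is omitted here, and the customer data and score thresholds are hypothetical):

```python
# Rank-based RFM scoring: each customer gets a 1-3 score per dimension,
# and the summed score maps to good / ordinary / bad. The grey-number
# uncertainty handling from the paper is not modeled in this sketch.

def score(values, reverse=False):
    """Rank-based 1..3 score; higher value -> higher score, or lower
    value -> higher score when reverse=True (e.g. recency in days)."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=reverse)
    scores = [0] * len(values)
    for rank, i in enumerate(order):
        scores[i] = 1 + (3 * rank) // len(values)
    return scores

# (recency in days, frequency, monetary value) per hypothetical customer
customers = [(5, 40, 9000), (60, 10, 1500), (200, 2, 100)]
r = score([c[0] for c in customers], reverse=True)  # recent -> high score
f = score([c[1] for c in customers])
m = score([c[2] for c in customers])
totals = [a + b + c for a, b, c in zip(r, f, m)]
labels = ["good" if t >= 7 else "bad" if t <= 4 else "ordinary" for t in totals]
```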
    • Open Access Article

      6 - A New Computational Intelligence Method to Improve Network Lifetime in Wireless Sensor Networks
      faezeh talebian hassan khotanloo mansour esmaeilpour
      Recent advances in wireless electronics and communications provide the ability to build small, economical sensors with low power consumption and many diverse applications. The limited energy capacity of sensors is a major challenge affecting these networks, and clustering is a well-known method for handling it. To find appropriate locations for cluster heads, the imperialist competitive algorithm, an emerging topic in computational intelligence, is used. Cluster heads are connected in a three-level model, so that cluster heads with low energy capacity that are far from the station are placed at level three and communicate with the base station indirectly. This ultimately increases the lifetime of the wireless sensor network.
    • Open Access Article

      7 - An Initial Estimation Method for Logical Transaction-based Software Projects
      mehrdad shahsavari
      The first and most basic requirement for the successful start of a project is a realistic and reasonable estimate. In this paper, in order to increase the accuracy of software project estimation and reduce the complexity of the estimation process, we introduce a method called the "Logical Transaction Point (LTP)". Our method is most appropriate for transactional software. Using this method, one can estimate the size of each use case and the size of the whole software. We show that the method is as accurate as the UCP technique, and, thanks to its greater transparency and simplicity, easier to deploy. The main bases for this method are function point analysis (FPA) and use case point (UCP) estimation.
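For context, the standard Use Case Points estimate that the paper compares against can be sketched as follows (weights follow Karner's original UCP method; the use-case counts and adjustment factors below are hypothetical):

```python
# Standard Use Case Points: UCP = (UUCW + UAW) * TCF * ECF, where UUCW
# weights use cases by complexity and UAW weights actors. The counts
# and the TCF/ECF values here are made-up example inputs.

UCW = {"simple": 5, "average": 10, "complex": 15}  # use-case weights
AW = {"simple": 1, "average": 2, "complex": 3}     # actor weights

def ucp(use_cases, actors, tcf=1.0, ecf=1.0):
    """use_cases / actors map a complexity class to a count; tcf and
    ecf are the technical and environmental complexity factors."""
    uucw = sum(UCW[k] * n for k, n in use_cases.items())
    uaw = sum(AW[k] * n for k, n in actors.items())
    return (uucw + uaw) * tcf * ecf

points = ucp({"simple": 4, "average": 6, "complex": 2},
             {"simple": 2, "average": 1, "complex": 1},
             tcf=0.95, ecf=1.05)
```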
    • Open Access Article

      8 - A new algorithm based on ensemble learning for learning to rank in information retrieval
      Azadeh Shakery elham ghanbari
      Learning to rank refers to machine learning techniques for training a model to perform a ranking task. It has been shown to be useful in many applications of information retrieval, natural language processing, and data mining. Learning to rank can be described by two systems: a learning system and a ranking system. The learning system takes training data as input and constructs a ranking model; the ranking system then uses the learned model for ranking prediction. In this paper, a new algorithm based on ensemble learning for learning ranking models in information retrieval is proposed. The algorithm iteratively constructs weak learners using a fraction of the training data whose weight distribution is determined by the previous weak learners, and combines the weak rankers to obtain the final ranking model. Building each ranking model on a fraction of the training data increases accuracy and reduces learning time. Experimental results on the LETOR 3.0 benchmark dataset show that the proposed algorithm significantly outperforms other ensemble learning algorithms.
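A boosting scheme of this general shape (iteratively reweighting training pairs and combining weak rankers) can be sketched in a RankBoost-like form. This is an illustrative sketch, not the paper's algorithm; the documents, features, and preference pairs are hypothetical:

```python
# RankBoost-style sketch: weak rankers are single-feature scorers; the
# weight of each preference pair grows when the chosen weak ranker
# misorders it, so later rounds focus on the hard pairs.
import math

def misrank_loss(feature_vals, pairs, w):
    """Weighted fraction of preference pairs (better, worse) that the
    feature misorders."""
    return sum(w[p] for p in pairs
               if feature_vals[p[0]] <= feature_vals[p[1]]) / sum(w.values())

def train(X, pairs, rounds=3):
    """X: docs x features; pairs: list of (better, worse) doc indices."""
    w = {p: 1.0 for p in pairs}
    ensemble = []  # list of (alpha, feature index)
    for _ in range(rounds):
        losses = [misrank_loss([row[f] for row in X], pairs, w)
                  for f in range(len(X[0]))]
        f = min(range(len(losses)), key=lambda i: losses[i])
        eps = max(losses[f], 1e-9)
        alpha = 0.5 * math.log((1 - eps) / eps)
        ensemble.append((alpha, f))
        for (i, j) in pairs:  # boost weight of misordered pairs
            if X[i][f] <= X[j][f]:
                w[(i, j)] *= math.e
    return ensemble

def rank_score(ensemble, doc):
    return sum(alpha * doc[f] for alpha, f in ensemble)

# 3 documents, 2 features; doc 0 should outrank doc 1, doc 1 outrank doc 2
X = [[0.9, 0.1], [0.5, 0.7], [0.2, 0.3]]
pairs = [(0, 1), (1, 2)]
model = train(X, pairs)
scores = [rank_score(model, d) for d in X]
```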
    • Open Access Article

      9 - Key Management in Digital Rights Management Systems for Offline Mode
      nafiseh shafiee mahdi shajari
      With the expanding use of digital content in the world of information technology, supervision and control over data, as well as prevention of unauthorized copying of documents, have become important concerns. Digital rights management systems are responsible for the secure distribution of digital content, and to this end they commonly employ cryptography and digital watermarking. Key management is very important in digital rights management systems, and methods such as digital watermarking alone are not effective in preventing the copying of files and digital documents. In this paper, we propose a model to improve the performance of digital rights management systems. The method is based on multi-part encryption and control over document publication. The main distinction of this approach from other methods is the use of obfuscation techniques to protect the key-generation algorithm against reverse engineering.
    • Open Access Article

      10 - A Review on Hadith Text Processing Tasks
      Sepideh Baradaran Behrooz Minaei Mohammad Ebrahim Shenassa Sayyed Ali Hossayni
      To achieve higher precision and lower processing time, it is recommended that the authenticity of hadith be evaluated by intelligent methods. Due to the huge volume of narrative texts (hadith) and the complex concepts and relationships within them, much research has been conducted on automatic hadith processing. Researchers have evaluated intelligent methods for processing both the Matn (text) and the Isnad (chain of transmission): of the previous studies reviewed, about 47% concern hadith text processing, 46% concern Isnad processing, and 7% address both. An examination of 97 studies in this field shows that hadiths have been evaluated for the accuracy of the text, the Isnad, or both. The processing tasks can be classified into categories such as ontology construction, hadith text classification, hadith similarity, and hadith authentication. The most frequently used approach in hadith text processing has been information retrieval.
    • Open Access Article

      11 - A Survey on the Applications of Graph Theory in Information Retrieval
      Maryam Piroozmand Amir Hosein Keyhanipour Ali Moeini
      Due to its power in modeling complex relations between entities, graph theory has been widely used for real-world problems. Information retrieval, meanwhile, has emerged as one of the major problems in the area of algorithms and computation. As graph-based information retrieval algorithms have been shown to be efficient and effective, this paper provides an analytical review of these algorithms and proposes a categorization of them. Briefly, graph-based information retrieval algorithms can be divided into three major classes: the first includes algorithms that use a graph representation of the dataset within the retrieval process; the second contains semantic retrieval algorithms that utilize graph theory; and the third covers applications of graph theory to the learning-to-rank problem. The set of reviewed works is analyzed by both frequency and publication time. An interesting finding of this review is that the third category is a relatively hot research topic in which only a limited number of recent works have been conducted.
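A classic instance of the first category (a graph representation of the dataset used directly for scoring) is PageRank. A minimal power-iteration sketch over a tiny hypothetical link graph:

```python
# PageRank by power iteration over a small made-up link graph; shown
# only to illustrate graph-based scoring in retrieval. Every node here
# has at least one outgoing link, so dangling-node handling is omitted.

def pagerank(links, d=0.85, iters=50):
    """links: node -> list of outgoing neighbors. Returns node -> score."""
    nodes = list(links)
    n = len(nodes)
    pr = {v: 1 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - d) / n for v in nodes}
        for v, outs in links.items():
            for u in outs:  # v distributes its score to its out-links
                new[u] += d * pr[v] / len(outs)
        pr = new
    return pr

# Hypothetical graph: "c" is linked by both "a" and "b", so it should
# accumulate the highest score.
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
pr = pagerank(links)
```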
    • Open Access Article

      12 - Explaining the Role of Standardization in Proposed Solutions for Privacy Protection in Health Data
      batool mehrshad Mohammad mehraeen Mohammad Khansari saeed mortazavi
      Introduction: Given the importance of data sharing in the digital era and its two main considerations, standardization and privacy protection, this article aims to answer a critical question: does standardization play a role in the proposed solutions for health data privacy protection? Methods: The present study is a systematic review conducted by searching databases such as Web of Science, PubMed, ScienceDirect, Springer, Magiran and SID, with a time-limit filter. After applying the inclusion and exclusion criteria and evaluating the results, the relevant studies were selected. Findings: Articles addressing standardization and privacy protection in health data were analyzed against five indicators. The need for standardization and its role in preserving privacy in health data are explained by examining the findings and discussing various privacy laws in the health field and their relationship with standardization. Our study reveals that, because the technical structure of fourth- and fifth-generation health care facilitates standardization, privacy protection can also be achieved through standardization. Directions for future research on this topic are also suggested. Conclusion: The results show that fourth- and fifth-generation health care systems, being technology-oriented, are built on standards, and these standards make their evaluation possible. Thus, if laws on health data privacy protection are developed based on standards, they will carry a strong guarantee of enforcement. This further highlights the critical role of standards development organizations in this field.