• List of Articles: Deep Neural Network

      • Open Access Article

        1 - Sentiment analysis for stock market prediction with deep neural network: A case study for international corporate stock database
        Hakimeh Mansour, Saeedeh Momtazi, Kamran Layeghi
        Sentiment analysis is one of the main pillars in fields such as financial management, marketing, and forecasting economic change in different countries. To build a sentiment analyzer from users' opinions on social media, we first extract important features between words with convolutional layers and then use LSTM layers to model the relationships across the word sequence and extract the key features of the text. The new features discovered by the LSTM increase the ability of the proposed model to classify companies' stock values. This article is based on the data of Nguyen et al. (2015) and uses only the sentiment information of people in social networks to predict stocks. Because each user's message is categorized into one of the sentiment classes "Strong Buy", "Buy", "Hold", "Sell", or "Strong Sell", the model can predict whether the next day's stock value will be high or low. The proposed structure consists of 21 neural-network layers combining convolutional neural networks and a long short-term memory network, and was applied to predict the stock markets of 18 companies. Although some previously presented models have used sentiment analysis to predict capital markets, advanced hybrid methods had not been applied in deep networks with good forecasting accuracy. The results were compared with 8 baseline methods and indicate that the proposed method performs significantly better than the baselines. For daily forecasts of stock changes, it achieved a 19.80% improvement in prediction accuracy compared with the deep CNN, and improvements of 24.50% and 23.94% compared with the models of Nguyen et al. (2015) and Derakhshan et al. (2019), respectively.
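        The abstract above describes a hybrid architecture: convolutional layers extract local features between words, LSTM layers model the word sequence, and a final classifier assigns one of the five sentiment classes. The exact 21-layer configuration is not given in the abstract, so the following is only a minimal PyTorch sketch of that CNN-to-LSTM pattern; the vocabulary size, embedding dimension, channel count, and hidden size are hypothetical placeholders, not the paper's settings.

```python
# Minimal sketch of the CNN + LSTM sentiment classifier described above.
# Layer sizes, vocabulary size, and embedding dimension are hypothetical.
import torch
import torch.nn as nn

class CnnLstmSentiment(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, conv_channels=64,
                 lstm_hidden=64, num_classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Convolution over the word dimension extracts local n-gram features.
        self.conv = nn.Conv1d(embed_dim, conv_channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()
        # LSTM models the sequence of convolutional features.
        self.lstm = nn.LSTM(conv_channels, lstm_hidden, batch_first=True)
        # Five sentiment classes: Strong Buy, Buy, Hold, Sell, Strong Sell.
        self.fc = nn.Linear(lstm_hidden, num_classes)

    def forward(self, token_ids):             # token_ids: (batch, seq_len)
        x = self.embed(token_ids)              # (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)                  # (batch, embed_dim, seq_len)
        x = self.relu(self.conv(x))            # (batch, conv_channels, seq_len)
        x = x.transpose(1, 2)                  # (batch, seq_len, conv_channels)
        _, (h_n, _) = self.lstm(x)             # h_n: (1, batch, lstm_hidden)
        return self.fc(h_n[-1])                # class logits

# Example: classify a batch of two 30-token messages.
model = CnnLstmSentiment()
logits = model(torch.randint(0, 20000, (2, 30)))
print(logits.shape)  # torch.Size([2, 5])
```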
      • Open Access Article

        2 - Multi-level ternary quantization for improving sparsity and computation in embedded deep neural networks
        Hosna Manavi Mofrad, Seyed Ali Ansarmohammadi, Mostafa Salehi
        Deep neural networks (DNNs) have attracted great interest due to their success in various applications. However, computational complexity and memory size are the main obstacles to implementing such models on embedded devices with limited memory and computational resources. Network compression techniques can overcome these challenges, and quantization and pruning are the most important among them. One well-known quantization method for DNNs is multi-level binary quantization, which not only exploits simple bit-wise logical operations but also reduces the accuracy gap between binary neural networks and full-precision DNNs. Since multi-level binary quantization cannot represent the zero value, it does not take advantage of sparsity. On the other hand, DNNs have been shown to be sparse, and pruning their parameters reduces the amount of data stored in memory while also speeding up computation. In this paper, we propose a pruning- and quantization-aware training method for multi-level ternary quantization that takes advantage of both multi-level quantization and data sparsity. In addition to increasing accuracy compared to multi-level binary networks, it gives the network the ability to be sparse. To save memory and reduce computational complexity, we increase sparsity in the quantized network by pruning until the accuracy loss is negligible. The results show that the potential computation speedup of our model at bit-level and word-level sparsity can be increased by 15x and 45x, respectively, compared to basic multi-level binary networks.
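        The abstract describes approximating weights with multiple ternary levels so that, unlike multi-level binary quantization, the zero value is representable and the codes can be sparse. The following is a minimal NumPy sketch of one way such multi-level ternary quantization can work: each level ternarizes the residual left by the previous level using a magnitude threshold and a per-level scale. The threshold ratio, number of levels, and helper names are hypothetical illustrations and are not taken from the paper.

```python
# Minimal sketch of multi-level ternary quantization: each level quantizes the
# residual of the previous one to {-1, 0, +1} with its own scale, so zero is
# representable and the codes are sparse. Thresholds and the number of levels
# are hypothetical choices, not the paper's exact settings.
import numpy as np

def ternarize(w, threshold_ratio=0.7):
    """One ternary level: codes in {-1, 0, +1} plus a positive scale."""
    delta = threshold_ratio * np.abs(w).mean()
    codes = np.where(np.abs(w) > delta, np.sign(w), 0.0)
    nonzero = np.abs(w[codes != 0])
    scale = nonzero.mean() if nonzero.size else 0.0
    return codes, scale

def multi_level_ternary(w, levels=2):
    """Approximate w as sum_i scale_i * codes_i over `levels` ternary bases."""
    residual = w.copy()
    bases = []
    for _ in range(levels):
        codes, scale = ternarize(residual)
        bases.append((codes, scale))
        residual = residual - scale * codes
    return bases

# Example: quantize a random weight matrix, then report sparsity and error.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
bases = multi_level_ternary(w, levels=2)
approx = sum(scale * codes for codes, scale in bases)
sparsity = np.mean(bases[0][0] == 0)          # zeros in the first-level codes
print(f"level-1 sparsity: {sparsity:.2f}, "
      f"mean abs error: {np.mean(np.abs(w - approx)):.3f}")
```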