In recent years, many automatic systems have been designed for text summarization and key-phrase extraction, but no system has yet been proposed for extracting key points. By our definition, key points are high-level concepts extractable from a text and may consist of words that do not necessarily appear in the original text. In this paper, we design an automatic system for extracting key points using lexical chains. The system uses FrameNet for shallow semantic parsing of texts. As the final step, we produce a set of tuples containing the important concepts of the original text together with their semantic roles. By generalizing from parts to the whole, we can then claim to extract a higher-level concept, which stands for a key point. Comparing the output of this system with human-written abstracts, we found that 42 percent of the cases generated by the system are similar to those produced by humans.
Background: The algorithmic classification of infected and healthy individuals by gene expression has been a topic of interest to researchers in numerous domains, including cancer. Several studies have presented solutions, such as neural networks and support vector machines (SVMs), to classify a diverse range of cancer cases. Such classifications provide some degree of accuracy, which depends heavily on the optimization approach and a suitable kernel. Objectives: This study aimed to propose a method to classify cancer-prone and healthy cases for breast cancer and colorectal cancer (CRC) using machine learning methods efficiently, increasing the accuracy of the classification process. Methods: This study presented an algorithm to diagnose individuals prone to breast cancer and CRC. The novelty of this algorithm lies in its suitable kernel and its feature extraction approach. Applying this algorithm, the study first identified the genes closely associated...
International Journal of Reliability, Risk and Safety: Theory and Application
Numerous methods have been introduced to predict the reliability of software. In general, these methods can be divided into two main categories: parametric (e.g., software reliability growth models) and non-parametric (e.g., neural networks). Both approaches have been successfully applied in software testing over the past four decades. Since most software reliability prediction data are available as time series, deep recurrent models (e.g., RNN, LSTM, NARX, and LSTM encoder-decoder networks) are powerful tools for reliability-related problems. However, overfitting is a major concern when using deep neural networks in software reliability applications. To address this issue, we propose the use of dropout: this study employs an LSTM encoder-decoder model with dropout to predict the number of faults in software and assess software reliability. Experimental results show that the proposed model achieves better prediction performance than other RNN-based models.
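As a rough illustration of the architecture described above, the sketch below builds a one-feature LSTM encoder-decoder with a dropout layer between the decoder and the output head, in PyTorch; the layer sizes, window length, and forecasting horizon are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class EncoderDecoderLSTM(nn.Module):
    def __init__(self, hidden=32, dropout=0.2):
        super().__init__()
        self.encoder = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.decoder = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.dropout = nn.Dropout(dropout)   # the regularizer that curbs overfitting
        self.head = nn.Linear(hidden, 1)

    def forward(self, src, horizon=1):
        _, state = self.encoder(src)         # summarize the fault-count history
        step = src[:, -1:, :]                # seed the decoder with the last observation
        preds = []
        for _ in range(horizon):
            out, state = self.decoder(step, state)
            step = self.head(self.dropout(out))
            preds.append(step)
        return torch.cat(preds, dim=1)

# a batch of 8 windows of 20 scaled weekly fault counts, forecast 3 steps ahead
x = torch.rand(8, 20, 1)
print(EncoderDecoderLSTM()(x, horizon=3).shape)   # torch.Size([8, 3, 1])
```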
International Symposium on Algorithms and Computation, Apr 1, 2016
In this paper we restrict the set splitting problem to the special case in which every set has exactly three elements. This restricted version is also NP-complete. We then introduce a general reduction from any set splitting instance to 3-set splitting. Next, we present a randomized algorithm and use a Markov chain model to analyze its running time. In the last section of this paper we introduce the "Fast Split" algorithm.
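The random-walk repair scheme this kind of Markov-chain analysis applies to can be sketched as follows; the function name and flip budget are illustrative, and this is not claimed to be the authors' exact "Fast Split" procedure.

```python
import random

def split_3sets(sets, universe, max_flips=10_000):
    # Start from a random 2-coloring; while some 3-set is monochromatic,
    # pick such a set and flip the color of one random element.
    color = {v: random.randint(0, 1) for v in universe}
    for _ in range(max_flips):
        mono = [s for s in sets if len({color[v] for v in s}) == 1]
        if not mono:
            return color                     # every set contains both colors
        v = random.choice(sorted(random.choice(mono)))
        color[v] ^= 1                        # flip one random element
    return None                              # no splitting found within the budget

sets = [{1, 2, 3}, {2, 3, 4}, {1, 4, 5}]
print(split_3sets(sets, universe={1, 2, 3, 4, 5}))
```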
Modeling and analysis of future prices has been a hot topic for economic analysts in recent years. Traditionally, the complex movements in prices are treated as a random or stochastic process; however, they may be produced by a deterministic nonlinear process. The accuracy and efficiency of economic models for short-period forecasting are strategic and crucial for the business world, and nonlinear models are efficient and well suited to short-term forecasting. Consequently, notable effort has been devoted to understanding different economic time series and the nonlinear dynamical models that can fit them. In this paper we investigate the Tehran Stock Exchange index time series: the Correlation Dimension (CD), the Hurst Exponent, and the Largest Lyapunov Exponent (LLE) of the time series are calculated. We show that the time series of the Tehran Stock Exchange index is nonlinear, and the analysis provides enough evidence to accept the conjecture of chaotic behavior in the index.
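For concreteness, a minimal rescaled-range (R/S) estimator of the Hurst exponent, one of the three measures listed above, might look like the sketch below; the chunk sizes and regression details are assumptions, not the paper's exact estimator.

```python
import numpy as np

def hurst_rs(series, min_chunk=8):
    # Rescaled-range (R/S) estimate of the Hurst exponent: the slope of
    # log(mean R/S) against log(chunk size). H > 0.5 suggests persistent,
    # trend-reinforcing behaviour; H near 0.5 suggests uncorrelated increments.
    x = np.asarray(series, dtype=float)
    sizes = [s for s in (2 ** k for k in range(3, 20))
             if min_chunk <= s <= len(x) // 2]
    rs_means = []
    for s in sizes:
        chunks = x[: len(x) // s * s].reshape(-1, s)
        dev = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
        r = dev.max(axis=1) - dev.min(axis=1)      # range of cumulative deviations
        sd = chunks.std(axis=1)
        rs_means.append((r[sd > 0] / sd[sd > 0]).mean())
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_means), 1)
    return slope

# white-noise returns should give an exponent near 0.5
print(hurst_rs(np.random.randn(4096)))
```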
Proper software performance is one of the key pillars of any operating system. Software failure can have dangerous consequences and can lead to adverse and undesirable events in the design or use phases. The goal of this study is to identify and evaluate the most significant software risks based on FMEA indices, with the aim of reducing the risk level by means of experts' opinions. To this end, TOPSIS, one of the most widely applied methods for prioritizing and ordering the significance of events, has been used. Since uncertainty in the data is inevitable, the entropy principle is combined with fuzzy theory to weight the specified indices. The applicability and effectiveness of the proposed approach are validated through a real case study: risk analysis of an air/space software system. The results show that the proposed approach is valid and can provide valuable and effective information in assisting risk management decision making...
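The ranking logic of entropy-weighted TOPSIS can be sketched on a crisp decision matrix as below; the paper works with fuzzy numbers and expert opinions, so this crisp version only illustrates the mechanics, and the assumption that every index is "larger = riskier" is ours.

```python
import numpy as np

def entropy_topsis(matrix):
    # Rows = failure modes, columns = FMEA indices (e.g. severity,
    # occurrence, detection), all assumed larger-is-riskier.
    m = np.asarray(matrix, dtype=float)
    p = m / m.sum(axis=0)                              # column-wise proportions
    e = -(p * np.log(p + 1e-12)).sum(axis=0) / np.log(len(m))
    w = (1 - e) / (1 - e).sum()                        # entropy weights per index
    v = w * m / np.sqrt((m ** 2).sum(axis=0))          # weighted normalized matrix
    d_best = np.linalg.norm(v - v.max(axis=0), axis=1)
    d_worst = np.linalg.norm(v - v.min(axis=0), axis=1)
    return d_worst / (d_best + d_worst)                # closeness: higher = riskier

scores = entropy_topsis([[7, 3, 5], [9, 6, 2], [4, 8, 6]])
print(scores.argsort()[::-1])   # failure modes ordered from riskiest down
```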
Community detection is one of the important topics in the study of complex networks. There are many community detection algorithms, such as the Streaming Community Detection Algorithm (SCoDA) and the Order Statistics Local Optimization Method (OSLOM). However, the performance of these algorithms on overlapping communities and communities with ambiguous structure is problematic, and achieving accurate results remains a challenge. In this paper, we propose a method based on finding maximal cliques and generating the corresponding graph, which is then used as input to the SCoDA and OSLOM algorithms. Synthetic non-overlapping and overlapping graphs and real graph data are used in our experiments, with the F1 score and NMI score as evaluation criteria. We show that the improved version of SCoDA achieves better results than the original SCoDA algorithm, and the improved version of OSLOM was also superior in performance when compared with the ori...
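A minimal version of the proposed preprocessing, building a graph whose nodes are maximal cliques and whose edges join overlapping cliques, might look like this sketch using networkx; the overlap criterion (any shared vertex) is an assumption, not necessarily the authors' exact construction.

```python
import networkx as nx

def clique_graph(g):
    # One node per maximal clique; connect cliques that share vertices.
    cliques = [frozenset(c) for c in nx.find_cliques(g)]
    cg = nx.Graph()
    cg.add_nodes_from(range(len(cliques)))
    for i in range(len(cliques)):
        for j in range(i + 1, len(cliques)):
            if cliques[i] & cliques[j]:    # overlapping cliques become adjacent
                cg.add_edge(i, j)
    return cg, cliques

g = nx.karate_club_graph()
cg, cliques = clique_graph(g)
print(cg.number_of_nodes(), cg.number_of_edges())
```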
In this paper, we focus on improving the performance of recommender systems. To do this, we propose a new algorithm named PBloofI, a kind of hierarchical Bloom filter. A Bloom filter is an array-based technique for representing items' features; since the feature vectors of items are sparse, the Bloom filter reduces space usage through hashing. To reduce time complexity, we use a hierarchical version of the Bloom filter based on a B+ tree of order d. Since Bloom filters trade space against time, the proposed hierarchical Bloom filter yields a remarkable reduction in the space and time complexity of recommender systems. To increase accuracy, we use a probabilistic version of the hierarchical Bloom filter. By measuring the accuracy of the algorithm, we show that the proposed algorithms not only decrease time complexity but also have no significant effect on accuracy.
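A flat Bloom filter, the building block that PBloofI arranges hierarchically, can be sketched as follows; the array size, hash count, and SHA-256-based hashing are illustrative choices, and the B+-tree hierarchy is omitted for brevity.

```python
import hashlib

class BloomFilter:
    # Compact membership structure for sparse item-feature sets:
    # false positives are possible, false negatives are not.
    def __init__(self, size=1024, hashes=4):
        self.size, self.hashes = size, hashes
        self.bits = bytearray(size)

    def _positions(self, item):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("genre:sci-fi")
print("genre:sci-fi" in bf, "genre:romance" in bf)   # True False (with high probability)
```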
International Journal of Engineering and Technology, 2020
The development of the IoT and cloud data storage has enabled many objects to access the internet through radio-frequency identification (RFID) systems. Currently, the use of the cloud as a back-end server is considered a viable and cost-effective option. The connection between reader and server, on one hand, and between tag and reader, on the other, must be secure and private. The formerly wired connection between tag and reader is now a wireless cloud-based link and calls for stricter security requirements. Not only is the connection insecure, but the cloud provider, which has full access to the tag's and reader's confidential information, is not necessarily trusted. In this paper, after analyzing previously proposed authentication protocols for RFID systems, one protocol was chosen for scrutiny. Analysis with the Scyther tool showed that this protocol did not have the required privacy and security properties. By reducing the computational overhead between tag and reader, and increasing the protocol's resistance to desynchronization and replay attacks, the present study attempts to remedy the protocol's shortcomings in privacy and security.
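To make the flavor of such lightweight protocols concrete, the sketch below shows a hash-based challenge-response round with the key-update/fallback idea often used against replay and desynchronization attacks; it is purely illustrative and is not the protocol analyzed in the paper.

```python
import hashlib
import os

def h(*parts):
    # hash-based MAC-style primitive used throughout the sketch
    return hashlib.sha256(b"|".join(parts)).digest()

class Party:
    def __init__(self, key):
        self.key = key
        self.old_key = key   # server keeps the previous key to recover from desync

shared = os.urandom(16)
tag, server = Party(shared), Party(shared)

nonce = os.urandom(16)                    # server's fresh challenge (replay resistance)
response = h(tag.key, nonce)              # tag proves knowledge of the shared key

if response in (h(server.key, nonce), h(server.old_key, nonce)):
    server.old_key, server.key = server.key, h(server.key, b"update")
    tag.key = h(tag.key, b"update")       # both sides roll the key forward
    print("tag authenticated; keys updated")
```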
Background: Intra-tumor heterogeneity is known to contribute to cancer complexity and drug resistance. Understanding the number of distinct subclones and the evolutionary relationships between them is scientifically and clinically very important, yet remains a challenging problem. Results: In this paper, we present BAMSE (BAyesian Model Selection for tumor Evolution), a new probabilistic method for inferring the subclonal history and reconstructing the lineage tree of heterogeneous tumor samples. BAMSE uses somatic mutation read counts as input and can leverage multiple tumor samples accurately and efficiently. In the first step, possible clusterings of mutations into subclones are scored and a user-defined number are selected for further analysis. In the next step, for each of these candidates, a list of trees describing the evolutionary relationships between the subclones is generated and sorted by posterior probability. The posterior probability is calculated using a Bayesian model that integrates prior beliefs about the number of subclones, the composition of the tumor, and the process of subclonal evolution. BAMSE also takes sequencing error into account. We benchmarked BAMSE against state-of-the-art software using simulated datasets. Conclusions: In this work we developed flexible and fast software to reconstruct the history of a tumor's subclonal evolution from somatic mutation read counts across multiple samples.
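A toy fragment of the kind of likelihood such read-count models build on: the probability of the observed variant-supporting reads under a binomial model at a candidate subclone frequency. The numbers and the direct use of the subclone fraction as the expected allele fraction are simplifying assumptions for illustration, not BAMSE's actual model.

```python
from scipy.stats import binom

# Probability of observing `alt_reads` variant-supporting reads out of
# `depth` total reads if a candidate frequency f directly set the
# expected allele fraction (real models also adjust for ploidy,
# purity, and sequencing error).
depth, alt_reads = 100, 23
for f in (0.10, 0.25, 0.50):
    print(f, binom.pmf(alt_reads, depth, f))
```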
International Journal in Foundations of Computer Science & Technology, 2014
In this paper, we propose a new analysis of randomized 2-SAT and 3-SAT algorithms and show that more precise bounds on the transition probabilities of the underlying Markov chain can be determined using Karnaugh maps. Our analysis shows that the probability that the selected literal is flipped correctly is close to 2/3 and 4/7 for 2-SAT and 3-SAT, respectively, when the number of variables is large. We then extend the result to k-SAT and show that both transition probabilities of the Markov chain in the randomized k-SAT algorithm approach 0.5. Finally, we use this result to determine the probability and complexity of finding a satisfying assignment with the randomized k-SAT algorithm, where the input formula is a conjunction of m clauses.
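The randomized algorithm under analysis is a random walk of the Papadimitriou type; a sketch for 2-SAT is given below, with the step budget chosen generously as an assumption rather than taken from the paper.

```python
import random

def random_walk_2sat(clauses, n_vars, max_steps=None):
    # Random-walk scheme of the kind analyzed via Markov chains: while
    # some clause is unsatisfied, flip a uniformly chosen literal of a
    # uniformly chosen unsatisfied clause. The walk's expected time for
    # 2-SAT is O(n^2); the budget below is a generous multiple of that.
    if max_steps is None:
        max_steps = 100 * n_vars ** 2
    assign = [random.choice([True, False]) for _ in range(n_vars)]
    sat = lambda lit: assign[abs(lit) - 1] == (lit > 0)
    for _ in range(max_steps):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign
        lit = random.choice(random.choice(unsat))
        assign[abs(lit) - 1] ^= True       # flip the chosen variable
    return None

# (x1 or x2) and (not x1 or x2) and (x1 or not x2): satisfied by x1 = x2 = True
print(random_walk_2sat([(1, 2), (-1, 2), (1, -2)], n_vars=2))
```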
Communications in Computer and Information Science, 2011
Mobile language-learning games usually focus only on spelling or out-of-context meanings for the entire dictionary, ignoring the role of an authentic environment. 'Detective Alavi' is an educational mobile game that provides a shared space for students to work collaboratively toward language learning in a narrative-rich environment. The game motivates and sustains conversation between learners and their teachers, and among the learners themselves, while they are immersed in the story of the game. A seamless self-assessment scoring system in the game structure provides a less intimidating environment for students to expose their weaknesses, and at the same time helps them judge which skills they have learned and to what extent. The game has shown improvements in different cognitive processes and a deeper level of learning during collaborative game play.
Finding the most influential people in a social network is an NP-hard problem that has attracted many researchers. The problem, also known as influence maximization, aims to find a set of people who can maximize the spread of influence through a target social network. In this paper, a new algorithm based on the linear threshold model of influence maximization is proposed. The main benefit of the algorithm is that it reduces the number of investigated nodes without loss of quality, thereby decreasing its execution time. Our experimental results on two well-known datasets show that the proposed algorithm is much faster and at the same time more efficient than state-of-the-art algorithms.
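A single cascade under the linear threshold model, the diffusion model the proposed algorithm builds on, can be simulated as below; the uniform edge weights 1/deg(v) and random thresholds are standard illustrative choices, not the paper's settings.

```python
import random
import networkx as nx

def lt_spread(g, seeds):
    # Linear threshold cascade: a node activates once the total weight
    # of its active neighbors reaches its random threshold.
    threshold = {v: random.random() for v in g}
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in g:
            if v in active:
                continue
            weight = sum(1 / g.degree(v) for u in g.neighbors(v) if u in active)
            if weight >= threshold[v]:
                active.add(v)
                changed = True
    return active

g = nx.karate_club_graph()
print(len(lt_spread(g, seeds=[0, 33])))   # size of one simulated cascade
```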
The goal of this study was to reconstruct a "genome-scale co-expression network" and find important modules in lung adenocarcinoma so that we could identify the genes involved in the disease. We integrated gene mutation, GWAS, CGH, array-CGH, and SNP array data in order to identify important genes and loci at genome scale. Then, on the basis of the identified genes, a co-expression network was reconstructed from the co-expression data; the reconstructed network was named the "genome-scale co-expression network". As the next step, 23 key modules were disclosed through clustering. By analyzing the modules, this study identified a number of genes implicated in lung adenocarcinoma for the first time. The genes EGFR,
International Journal of Enterprise Information Systems, 2013
The success of a virtual organization (VO) largely depends on the effective collaboration of its members in orchestrating their knowledge, skills, core competences, and resources; this enables VOs to enhance their competitive capabilities and respond better to business opportunities. Most previous studies have focused on the influence of individual contingent factors on the Knowledge Management (KM) process and its outcomes, neglecting the interacting relationships among these factors. The objective of this study is to explore the interacting patterns among a VO's enabling capabilities from a covariation perspective on Knowledge Management Performance (KMP). The authors introduce the concept of "consistency", which helps them investigate the impact of the VO's enabling capabilities on KMP. The results of a field survey of 95 Iranian IT/engineering firms that implement KM processes show that "consistency" among these enabling capabilities has a significant positive impact on KMP. The novel po...
International Symposium on Algorithms and Computation, Mar 24, 2016
In this paper we propose a Cellular Automaton-based local algorithm to solve the autonomous sensor gathering problem in Mobile Wireless Sensor Networks (MWSNs). In this problem, connected mobile sensors are initially deployed in the network, and the goal is to gather all sensors at one location. The sensors decide to move based only on their local information. A Cellular Automaton (CA), as a dynamical system in which space and time are discrete and rules are local, is a proper candidate for simulating and analyzing the problem, and using a CA yields a better understanding of it.
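One synchronous update of a CA-like local gathering rule might look like the following sketch, in which each sensor sees only neighbors within a fixed radius and steps at most one grid cell toward their centroid; the radius and movement rule are assumptions for illustration, not the paper's exact automaton.

```python
import numpy as np

def ca_gather_step(positions, radius=3.0):
    # One synchronous update: each sensor looks only at neighbors
    # within `radius` (itself included) and moves at most one grid
    # cell per axis toward their centroid.
    pos = np.asarray(positions, dtype=int)
    nxt = pos.copy()
    for i, p in enumerate(pos):
        near = pos[np.linalg.norm(pos - p, axis=1) <= radius]
        nxt[i] = p + np.sign(near.mean(axis=0).round() - p)
    return nxt

sensors = np.array([[0, 0], [2, 1], [1, 3], [3, 3]])
for _ in range(5):
    sensors = ca_gather_step(sensors)
print(sensors)   # positions drift toward a common gathering point
```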
Objective: The purpose of the present study is to identify priority indicators, based on a qualitative analysis of the existing research, and to provide a framework for measuring the performance of the information security service supply chain. Methods: The methodology of this research is essentially descriptive and qualitative and was carried out in two stages. In the first phase, 133 papers were reviewed, of which 28 articles were approved after evaluation. Then, using the CASP method, 15 articles on measuring the performance of the service chain received the minimum score for a qualitative content analysis. By reviewing these articles, a conceptual framework for measuring the performance of the service chain was presented in the form of a "logic model". This model is a tool that illustrates the logic of operations in four components: inputs, processes, outputs, and outcomes. After verifying the reliabil...
Cloud computing introduces new capabilities to organizations, such as cost efficiency, scalability, access to global markets, ease of use, flexibility, and rapid adaptability to environmental changes, and it plays an important role in organizational innovation and agility. In spite of the great opportunities this technology brings, adoption and migration rates are low in many organizations, especially in developing countries. A major problem is that previous studies of cloud computing adoption have considered only limited aspects; no comprehensive framework covering all affecting factors has been developed. To address this issue, this research uses a systematic literature review of selected prior research and a meta-synthesis approach to qualitatively analyze and synthesize the different factors that affect the adoption of cloud computing, and finally proposes a comprehensive framework. In this research, after system...