Papers by Hadi Tabatabaee Malazi
arXiv (Cornell University), Nov 4, 2019
One of the significant problems in peer-to-peer databases is the collision problem. These databases do not rely on a central leader, which increases scalability and fault tolerance. Using such systems for high-throughput computing adds flexibility and avoids the problems of computing systems that depend on a central node. Research in this area is limited, and the existing approaches do not seem suitable for large-scale use. In this paper, we use Cassandra, a distributed database built on a peer-to-peer network, as a high-throughput computing system. By default, Cassandra uses Paxos to elect a central leader, which causes the collision problem. Among the existing consensus algorithms, Raft separates the key elements of consensus, such as leader election, and thereby enforces a stronger degree of coherency to reduce the number of states that must be considered, including collisions.
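Raft's defence against election collisions is the randomized election timeout: each follower waits a random interval before declaring candidacy, so simultaneous candidacies are unlikely. A minimal sketch of that single idea (the 150-300 ms range is Raft's textbook suggestion; the node names are illustrative and not taken from the paper's Cassandra setup):

```python
import random

def elect_leader(node_ids, seed=None):
    """One Raft-style election round: every follower draws a randomized
    election timeout, and the node whose timeout expires first becomes
    the candidate and, absent a competing candidate, the leader."""
    rng = random.Random(seed)
    # Raft suggests timeouts drawn uniformly from a range such as 150-300 ms;
    # the randomization makes split votes (collisions) rare.
    timeouts = {node: rng.uniform(150, 300) for node in node_ids}
    return min(timeouts, key=timeouts.get)

leader = elect_leader(["n1", "n2", "n3"], seed=7)
```

For a given seed the outcome is deterministic, which is convenient when reproducing election scenarios in simulation.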
IEEE Communications Standards Magazine, Jun 1, 2023
Recent advances in vehicular technologies illuminate the significance of vehicular networking, where information dissemination between constituent network elements plays a crucial role. Information sharing in vehicular networks is challenging because mobility introduces a dynamic environment with overwhelming data traffic. Information-Centric Networking (ICN) reduces overheads by in-network caching and name-oriented communication to mitigate these challenges. However, network resources can be used more efficiently if both RSUs and vehicles are managed autonomously. This paper proposes autonomous networking to improve information dissemination in vehicular ICN by using available in-network contextual information. First, a Segment-aware ICN (SA-ICN) scheme is proposed by developing a new dynamic namespace convention. Then, we extend SA-ICN by proposing a Segment and Provider-aware Gossiping enabled ICN (SPG-ICN) that leverages a gossip protocol to self-configure and self-optimize the network using content providers. The simulation results demonstrate performance improvements compared to existing content dissemination schemes regarding packet overheads, data delivery delay, and network load.
IEEE Computer, 2022
The Internet of Things (IoT) is considered a key enabling technology for circular cities. Using IoT for Intelligent Transportation Systems (ITS) can help improve driving safety while reducing fuel consumption, congestion, and travel time. However, leveraging IoT for ITS requires overcoming the interoperability challenges that arise due to the heterogeneous nature of IoT deployments, as they are based on multifarious architectures, middleware platforms, and data models. This heterogeneity hinders information exchange across different systems in smart and circular cities. This paper discusses the recent developments in the area of interoperable IoT. A smart traffic use case is presented to deliver data-driven transportation services. The proposed system exploits multiple real-world IoT devices and legacy traffic systems deployed by the transportation authorities. Initial results obtained by exploiting heterogeneous traffic data streams reveal interesting traffic patterns for different vehicle types.
Simulation Modelling Practice and Theory, 2023
The extensive deployment of Internet of Things (IoT) devices in urban areas enables smart city systems (e.g., urban traffic detection and air pollution monitoring) to use these devices to discover the regions affected by various events, particularly complex ones. A complex event is inferred from patterns of different individual primitive events according to their spatio-temporal information. Complex event processing (CEP) systems let applications define their reasoning rules in event processing languages (EPL) similar to SQL. However, current languages and CEP engines do not efficiently support spatial data characteristics. Geospatial CEP is challenging because applications have diverse definitions for complex events (e.g., the air quality index), IoT devices have heterogeneous specifications and capabilities, and complex events may happen concurrently. This paper proposes the Geo-Tesla language, which enables smart city applications to define spatial attributes in complex event definitions and reasoning rules. We then devise the GeoT-Rex CEP engine, which compiles the Geo-Tesla language and implements the geospatial operations needed to process spatial data. Together, Geo-Tesla and GeoT-Rex enable applications to process complex events with spatial characteristics in their reasoning rules and to identify the boundaries of a detected complex event (its footprint). The evaluation results from the proof-of-concept implementation show that our solution improves footprint identification by up to 44% in the largest network size. They also show that the detection percentage for concurrently appearing complex events is, on average, 40% higher than a close state-of-the-art baseline.
Simulation Modelling Practice and Theory, 2018
The widespread use of sensor nodes operating under the Internet of Things paradigm motivates researchers to build reliable systems capable of detecting faulty nodes. Such nodes decrease the accuracy and functionality of the network, which ultimately degrades the quality of network services. From the temporal point of view, faults can be either permanent or intermittent. Detecting the latter is more challenging since the nodes show contradictory behaviors at different times. From the topological point of view, the mobility of sensor nodes is an intrinsic characteristic of many IoT-based applications, where numerous mobile nodes are managed by static overlay nodes. The dynamics of these networks introduce a second challenge in identifying faulty nodes. Several works have addressed the problem, but there is a research gap in identifying hybrid soft sensor faults in such networks. This paper focuses on detecting soft faults in the sensing unit of the nodes. We devised a new method, called Hybrid Fault Detection in Mobile Sensors, to detect nodes with mixed permanent and intermittent faults. The main idea is inspired by a software debugging approach. We also applied data mining techniques, using DBSCAN to validate sensed data and K-means to differentiate the classes of faults. We evaluated the devised method using the NS2 simulator in various situations. One outcome of the method is that node mobility does not reduce accuracy, in contrast to most traditional methods. Moreover, the evaluation demonstrates promising results for networks with more than 50% faulty nodes. The results also show perfect performance in detecting permanent and intermittent faults in networks with various percentages of faulty nodes.
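The use of DBSCAN for validating sensed data rests on a density argument: a healthy reading has many close neighbours, while a faulty one sits alone. A self-contained sketch of that noise-point test on one-dimensional readings (the thresholds and sample values are invented for illustration, not the paper's parameters):

```python
def density_outliers(readings, eps=1.0, min_pts=3):
    """Flag readings that lack at least min_pts neighbours within
    distance eps -- the DBSCAN notion of a noise point, applied to 1-D
    sensor values. Returns the indices of suspect samples."""
    outliers = []
    for i, x in enumerate(readings):
        # The neighbour count includes the point itself, as in DBSCAN.
        neighbours = sum(1 for y in readings if abs(x - y) <= eps)
        if neighbours < min_pts:
            outliers.append(i)
    return outliers

# Mostly consistent temperature samples with two faulty spikes.
samples = [20.1, 20.3, 19.9, 20.0, 45.7, 20.2, -3.4]
suspects = density_outliers(samples, eps=1.0, min_pts=3)  # indices 4 and 6
```

Readings flagged this way could then be fed to a second-stage classifier (K-means in the paper) to separate permanent from intermittent fault patterns.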
IEEE Access, 2022
The advent of new cloud-based applications such as mixed reality, online gaming, autonomous driving, and healthcare has introduced infrastructure management challenges to the underlying service network. Multi-access edge computing (MEC) extends the cloud computing paradigm and leverages servers near end-users at the network edge to provide a cloud-like environment. The optimal placement of services on edge servers plays a crucial role in the performance of such service-based applications. The dynamic service placement problem addresses the adaptive configuration of application services at edge servers to serve end-users and devices that need to offload computation tasks. While the approaches reported in the literature shed light on this problem from particular perspectives, a panoramic study reveals the research gaps in the big picture. This paper introduces the dynamic service placement problem and outlines its relations to other problems such as task scheduling, resource management, and caching at the edge. We also present a systematic literature review of existing dynamic service placement methods for MEC environments from networking, middleware, application, and evaluation perspectives. In the first step, we review different MEC architectures and their enabling technologies from a networking point of view. We also introduce different cache deployment solutions in network architectures and discuss their design considerations. The second step investigates dynamic service placement methods from a middleware viewpoint. We review different service packaging technologies and discuss their trade-offs. We also survey the methods and identify eight research directions that researchers follow. Our study categorises the research objectives into six main classes, proposing a taxonomy of design objectives for the dynamic service placement problem. We also investigate the reported methods and devise a solution taxonomy comprising six criteria. In the third step, we concentrate on the application layer and introduce the applications that can take advantage of dynamic service placement. The fourth step investigates the evaluation environments used to validate the solutions, including simulators and testbeds. We introduce real-world datasets, such as edge server locations, mobility traces, and service requests, used to evaluate the methods. In the last step, we compile a list of open issues and challenges categorised by the various viewpoints.
In recent years, energy efficiency and data aggregation have been major concerns in many applications of Wireless Sensor Networks (WSNs). A WSN consists of a large number of sensor nodes, each of which senses an environmental phenomenon and sends a report to a sink node. Since the sensor nodes are powered by limited-capacity batteries, energy efficiency is a major challenge in WSN applications, and many novel techniques are required to improve it. Data aggregation is one of the most important means of achieving energy efficiency in WSNs; its main goal is to decrease energy consumption by reducing the need for redundant data transmissions. In this paper, we propose an analytical model for evaluating energy consumption in terms of data transmission, reception, and aggregation in a cluster-based WSN architecture using an M/M/1 queuing model. The analysis supports the validity of the proposed approach by comparing the energy consum...
Monitoring applications are one of the main uses of wireless sensor networks, where the sensor nodes are responsible for reporting any event of interest in the monitored area. Due to their limited energy storage, the nodes are prone to failure, which may lead to the network partitioning problem. To cope with this problem, the number of sensor nodes deployed in an area exceeds the required quantity. The challenge is to turn on a minimal number of nodes while preserving network connectivity and area coverage. In this paper, we apply computational geometry techniques to introduce a new two-phase algorithm, called Delaunay Based Connected Cover (DBCC), to find a connected cover in an omnidirectional wireless sensor network. In the first phase, the Delaunay triangulation of all sensors is computed and a minimal number of sensors is selected to ensure coverage of the region. In the second phase, the connectivity of the selected nodes is ensured. The devised method is simulated in NS2 and is compared with tw...
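A common building block in Delaunay-based coverage selection is the circumradius test: if a triangle of sensor positions has a circumradius larger than the sensing range, the area around its circumcentre may be uncovered. A sketch of that test (the formula R = abc / (4A) is the standard circumradius identity; treating it as a hole criterion is a simplification for illustration, not DBCC's full first phase):

```python
import math

def circumradius(a, b, c):
    """Circumradius of the triangle abc: R = |AB| * |BC| * |CA| / (4 * area)."""
    ab = math.dist(a, b)
    bc = math.dist(b, c)
    ca = math.dist(c, a)
    # Shoelace formula for the absolute triangle area.
    area = abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2
    return ab * bc * ca / (4 * area)

def has_coverage_hole(triangle, sensing_range):
    """A Delaunay triangle whose circumradius exceeds the sensing range
    suggests a possible coverage hole near its circumcentre."""
    return circumradius(*triangle) > sensing_range
```

For the right triangle (0,0), (2,0), (0,2) the circumradius is sqrt(2), so a sensing range of 1 leaves a hole while a range of 2 does not.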
2021 IEEE International Conference on Services Computing (SCC), 2021
Multi-access edge computing (MEC) aims to execute mobile cloud services in edge servers located near end-users to provide higher Quality of Experience (QoE). Centralised methods for dynamic service placement in MEC require central access to control every server, which is challenging where servers belong to different administrative domains. Other approaches use distributed decision-making, but most of them do not consider cooperation between servers. Dynamic, distributed service placement can potentially provide workload orchestration and higher QoE. However, this is challenging where service demand patterns are non-stationary. This paper proposes a multi-armed bandit-based method where servers cooperatively decide to host service replicas by applying reinforcement learning that uses service characteristics and context information to minimise response time and backhaul traffic. By sharing cached services, the servers achieve better workload distribution while autonomously making placement decisions. The simulations demonstrate improvements in response time and reduction in backhaul traffic compared to close baselines.
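The multi-armed bandit framing treats each candidate service as an arm whose reward (e.g., negative observed response time) is learned online. A minimal epsilon-greedy sketch of that decision loop (the class name, service names, and epsilon value are invented for illustration; the paper's cooperative, context-aware variant is considerably richer):

```python
import random

class PlacementBandit:
    """Epsilon-greedy multi-armed bandit: each arm is a candidate
    service to host on this edge server. With probability epsilon the
    server explores a random service; otherwise it exploits the arm
    with the best average observed reward."""

    def __init__(self, services, epsilon=0.1, seed=None):
        self.rng = random.Random(seed)
        self.epsilon = epsilon
        self.counts = {s: 0 for s in services}
        self.values = {s: 0.0 for s in services}

    def choose(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.counts))   # explore
        return max(self.values, key=self.values.get)    # exploit

    def update(self, service, reward):
        # Incremental mean of the rewards observed for this arm.
        self.counts[service] += 1
        n = self.counts[service]
        self.values[service] += (reward - self.values[service]) / n
```

In a non-stationary setting like shifting service demand, the incremental mean would typically be replaced by an exponentially weighted average so old observations fade.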
IEEE Access, 2021
Managing multi-tenant edge devices with heterogeneous capabilities scattered across an urban area requires significant communication and computing power, which is challenging when devices are also resource-constrained. These devices play a crucial role in smart city monitoring systems by notifying various municipal organizations about a wide range of ongoing complex events. Some recent approaches in complex event processing use a publish-subscribe architectural pattern to decouple simple event producers from complex event consumers. However, they did not fully address communication efficiency or diverse quality of service (QoS) requirements in aggregating events. This paper proposes a new architecture that integrates the publish-subscribe architectural pattern with software-defined network technology for urban monitoring applications. The architecture enhances monitoring applications with capabilities of distributed processing and detection of complex events. It also enables application developers to define QoS requirements and supports the TESLA complex event specification language. The main focus of our work is on energy and network efficiency. The simulation results demonstrate significant improvements in energy consumption and data packet traffic compared to three close baselines.
Information Processing & Management, 2019
The widespread popularity and worldwide adoption of social networks have raised interest in analyzing the content created on them. One such analytical application, on social networks including Twitter, is identifying the location of political and social events, natural disasters, and so on. The present study focuses on the localization of traffic accidents. Outdated and inaccurate information in user profiles, the absence of location data in tweet texts, and the limited number of geotagged posts are among the challenges tackled by location estimation. Adopting the Dempster–Shafer theory of evidence, this study estimates the location of accidents using a combination of user profiles, tweet texts, and the place attachments in tweets. The results indicate improved performance in terms of error distance and average error distance compared to previously developed methods: the proposed method reduces the error distance by 26%.
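Dempster's rule of combination, the standard way to fuse independent evidence sources such as user profiles, tweet texts, and place attachments, multiplies the masses of intersecting focal elements and renormalises away the conflict. A minimal sketch over frozenset focal elements (the zone names are made up; the paper's actual frames of discernment are richer):

```python
def combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose keys
    are frozenset focal elements. Mass assigned to empty intersections
    (the conflict) is discarded and the remainder renormalised."""
    combined = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    norm = 1.0 - conflict  # total conflict (norm == 0) is undefined
    return {k: v / norm for k, v in combined.items()}

# Two evidence sources about which city zone an accident occurred in.
m_profile = {frozenset({"zone_a"}): 0.6, frozenset({"zone_a", "zone_b"}): 0.4}
m_text = {frozenset({"zone_a"}): 0.5, frozenset({"zone_a", "zone_b"}): 0.5}
fused = combine(m_profile, m_text)
```

Here the fused mass concentrates on zone_a (0.8), illustrating how agreeing sources sharpen the location estimate.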
Knowledge and Information Systems, 2018
Social sensing is a new paradigm that inherits the main ideas of sensor networks and treats users as a new type of sensor. For instance, when users find out that an event has happened, they start to share related posts and express their feelings on social networks. Consequently, these networks are becoming a powerful news medium across a wide range of topics. Existing event detection methods mostly focus on either keyword bursts or the sentiment of posts, and ignore natural aspects of social networks such as the dynamic rate of arriving posts. In this paper, we devise a Dynamic Social Event Detection approach that exploits a new dynamic windowing method. In addition, we add a mechanism that combines the sentiment of posts with keyword bursts within the dynamic windows. This combination of sentiment analysis and frequently used keywords enables our approach to detect events with different levels of user engagement. To analyze the behavior of the devised approach, we use a wide range of metrics, including the histogram of window sizes, sentiment oscillations of posts, topic recall, keyword precision, and keyword recall, on two benchmark datasets. One significant outcome of the devised method is a topic recall of 100% on the FA Cup dataset.
Simulation Modelling Practice and Theory, 2017
The growing trend in pervasive systems forces traditional wireless sensor networks to deal with new challenges, such as dynamic application requirements and heterogeneous networks. One of the latest paradigms in this area is the software-defined wireless sensor network. Under this paradigm, the network manages topological information and forwarding decisions using a bipartite architecture in which a control plane decides the forwarding policies and the data plane (i.e., ordinary sensor nodes) executes them. Unfortunately, in highly dynamic networks, this approach generates an overhead of control packet exchanges between the ordinary nodes and the control plane, which leads to additional energy consumption. This paper proposes a fuzzy logic based solution, called the Fuzzy Topology Discovery Protocol (FTDP), to improve the efficiency of software-defined wireless sensor networks. The work is designed on top of Software Defined Networking for WIreless SEnsor networks (SDN-WISE), an open-source solution for software-defined wireless sensor networks, and is one of the first attempts to use fuzzy theory in this setting. The simulation results show that our approach can increase the lifetime of the network by 45% and decrease the packet loss ratio by 50% compared to the basic SDN-WISE solution.
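FTDP's core ingredient is ordinary fuzzy inference. As a flavour of how such a controller might score candidate next hops, here is a toy Mamdani-style rule combining battery level and link quality; the membership shapes, input ranges, and the rule itself are invented for illustration and are not FTDP's actual rule base:

```python
def triangular(x, a, b, c):
    """Standard triangular membership function: rises from a to a peak
    of 1.0 at b, then falls back to 0.0 at c."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def next_hop_score(battery_pct, rssi_dbm):
    """Toy rule: a node is a good next hop if its battery is high AND
    its link is strong; the fuzzy AND is the minimum of the two
    membership degrees (Mamdani inference)."""
    high_battery = triangular(battery_pct, 20, 80, 140)   # ramps up from 20%
    strong_link = triangular(rssi_dbm, -90, -40, 10)      # ramps up from -90 dBm
    return min(high_battery, strong_link)
```

A score like this lets the control plane rank forwarding candidates with smooth trade-offs instead of hard thresholds, which is what makes fuzzy logic attractive for noisy, dynamic topologies.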
Applied Intelligence, 2017
New healthcare technologies are emerging as society ages, at the center of which is the development of smart homes for monitoring elders' activities. Identifying a resident's activities in an apartment is an important module in such systems. The dense sensing approach embeds sensors in the environment to report detected events continuously. The events are segmented and analyzed by classifiers to identify the corresponding activity. Although several methods have been introduced in recent years for detecting simple activities, recognizing complex ones requires more effort. Because each activity has a different duration and event density, finding the best segment size is one of the challenges in activity detection; another is choosing classifiers capable of detecting both simple and interleaved activities. In this paper, we devise a two-phase approach called CARER (Complex Activity Recognition using Emerging patterns and Random forest). In the first phase, emerging patterns are mined and various features of the activities are extracted to build a model using the Random Forest technique. In the second phase, the sequences of events are segmented dynamically by considering their recency and sensor correlation. The segments are then analyzed by the model generated in the previous phase to recognize both simple and complex activities. We examined the performance of the devised approach using the CASAS dataset. First, we investigated several classifiers; the outcome showed that the combination of emerging patterns and random forest provides a higher degree of accuracy. Then, we compared CARER with a static window approach based on a Hidden Markov Model, replacing the dynamic segmentation module of CARER with a static one for a fair comparison. The results showed more than 12% improvement in F-measure. Finally, we compared our work with Dynamic sensor segmentation for real-time activity recognition, which also uses dynamic segmentation; the F-measure demonstrated up to 12.73% improvement.
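The recency side of dynamic segmentation can be pictured as cutting the event stream wherever the gap between consecutive events grows too large, on the intuition that a long pause often separates activities. A minimal sketch (the gap threshold and timestamps are illustrative; CARER additionally weighs sensor correlation when choosing cut points):

```python
def segment_events(timestamps, max_gap):
    """Split a time-ordered event stream into segments, starting a new
    segment whenever the gap between consecutive events exceeds
    max_gap. Recency-based cutting is one half of dynamic segmentation."""
    if not timestamps:
        return []
    segments = [[timestamps[0]]]
    for prev, cur in zip(timestamps, timestamps[1:]):
        if cur - prev > max_gap:
            segments.append([cur])   # long pause: a new activity likely began
        else:
            segments[-1].append(cur)
    return segments
```

Each resulting segment would then be turned into a feature vector and passed to the trained classifier, so the segment boundaries directly shape recognition quality.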
Microprocessors and Microsystems, 2017
Community detection is an in-demand technique for analyzing complex and massive graph-based networks. The quality of the detected communities within an acceptable time is an important aspect of any algorithm that aims to pass through an ultra-large-scale graph, for instance a social network graph. In this paper, an efficient method is proposed to tackle the Louvain community detection problem on multicore systems through thread-level parallelization. The main contribution of this article is an adaptive parallel thread assignment for calculating which qualified neighbor nodes to add to a community, which yields better load balancing for the executing threads. The proposed method is evaluated on an AMD system with 64 cores and reduces the execution time by 50% compared with the previously fastest parallel algorithms. Moreover, it was observed in the course of the experiments that our method could find comparably qualified
Applied Intelligence, 2017
Recommender systems have been one of the most prominent information filtering techniques of the past decade. However, they suffer from two major problems that degrade the accuracy of suggestions: data sparsity and cold start. The popularity of social networks has shed light on a new generation of such systems, called social recommender systems, which act promisingly in solving the data sparsity and cold start issues. Since social relationships are not available to every system, the implicit relationships between items can be an adequate substitute. In this paper, we explore the effect of combining the implicit relationships of items with the user-item matrix on the accuracy of recommendations. The new Item Asymmetric Correlation (IAC) method detects the implicit relationship between each pair of items by considering an asymmetric correlation between them. Two dataset types, the output of IAC and the user-item matrix, are fused into a collaborative filtering recommender via the Matrix Factorization (MF) technique. We apply the two most widely used mapping models in MF, Stochastic Gradient Descent and Alternating Least Squares, to investigate their performance in the presence of sparse data. The experimental results of
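Of the two mapping models compared, stochastic gradient descent is the simpler to sketch: each observed rating nudges the corresponding user and item latent vectors along the error gradient. A generic SGD matrix factorisation sketch (the hyperparameters and toy ratings are illustrative; the paper's IAC-derived item relationships are not modelled here):

```python
import random

def mf_sgd(ratings, n_users, n_items, k=2, lr=0.01, reg=0.02,
           epochs=200, seed=0):
    """Matrix factorisation trained with SGD: approximate rating r_ui by
    the dot product of user vector P[u] and item vector Q[i], updating
    both along the error gradient with L2 regularisation."""
    rng = random.Random(seed)
    P = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - sum(P[u][f] * Q[i][f] for f in range(k))
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (err * qi - reg * pu)
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

# Toy (user, item, rating) triples; the real input would also fold in IAC output.
ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0)]
P, Q = mf_sgd(ratings, n_users=2, n_items=2)
```

ALS, the alternative mapping model, would instead solve for P and Q in alternating closed-form least-squares steps, which tolerates sparser data at a higher per-step cost.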
Journal of Network and Computer Applications, 2016
In the growing trend of globalization, logistics and transportation play a key role in many industries. Products are shipped from one country to another, and at the heart of this transportation is the port, where large quantities of goods are imported and exported on a daily basis. Applying Internet of Things (IoT) technologies to port warehouses is the main focus of this work. Centralized warehouse management systems are prone to a single point of failure, are not fault tolerant, and have limited scalability. Another issue is that port warehouses may be run by different companies, so the privacy of detailed product information must be preserved. In this paper, we design an IoT-based architecture for warehouse management according to facts gathered from the Khorramshahr port. The main issues this paper addresses are scalability, fault tolerance, and privacy. The devised architecture is named Khorramshahr, after the port. A tailored version of the Chord architecture is exploited for a Distributed Hash Table (DHT), which provides the required scalability and fault tolerance. Under the devised architecture, each company can solely manage its dedicated nodes and preserve its privacy. To boost the lookup process, the design is enhanced with Bloom and Quotient filters. Moreover, to gain performance, the architecture uses a hybrid approach that combines the client-server and peer-to-peer paradigms. To evaluate performance, the DHT of the architecture is simulated with OMNeT++ and OverSim. The simulation results show that scaling the number of terminals from 25 to 250 increases the access time for an item by only 38%. Besides, increasing the number of requests from 10,000 to 50,000 yields 5% and 10% improvements, respectively, in lookup message latency compared to ODSA. The simulation results also exhibit a lower false positive rate for the Quotient filter approach, which makes it the first implementation candidate; only in cases with strict constraints on memory consumption is the Bloom filter approach favored.
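A Bloom filter speeds up lookups by letting a warehouse node answer "definitely not here" without a full DHT query, at the cost of occasional false positives. A minimal version (the sizes and the container key are illustrative; a Quotient filter would replace the bit array with quotient/remainder slots that also support resizing and deletion):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions per key in an m-bit array.
    Membership tests may yield false positives but never false
    negatives, so a negative answer safely skips the DHT lookup."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _positions(self, key):
        # Derive k independent positions by salting one cryptographic hash.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos] = 1

    def might_contain(self, key):
        return all(self.bits[pos] for pos in self._positions(key))
```

The false positive rate grows with the number of stored keys, which matches the abstract's observation that filter choice becomes a memory-versus-accuracy trade-off.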
2015 18th CSI International Symposium on Computer Architecture and Digital Systems (CADS), 2015
In recent years, detecting dense sub-graphs known as communities in massive graphs has become a common problem in different fields of science, as communities simplify the study of complex graphs. Due to the ceaseless growth in the size of the graphs used in social networks (with billions of nodes and edges), execution time is an important factor in detecting communities. To cope with this problem, a new parallel community detection algorithm is presented in this paper. The main idea behind the proposed method is to assign parallel threads to the calculation of which qualified neighbor nodes to add to a community. The proposed algorithm is tested on a general PC (Intel Core i7, 4 GB). It reduces the algorithm execution time by 25% to 78% compared with the fastest previous parallel algorithms.
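The parallelisation idea, scoring candidate neighbour communities concurrently and keeping the best move, can be sketched with a thread pool; the gain function below is a stand-in for the real Louvain modularity-gain computation, and the function and parameter names are invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def best_neighbor_community(gain_of, communities, workers=4):
    """Evaluate the (stand-in) modularity gain of moving a node into
    each candidate neighbour community using a pool of threads, then
    return the community with the highest gain and that gain."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        gains = list(pool.map(gain_of, communities))
    best = max(range(len(communities)), key=gains.__getitem__)
    return communities[best], gains[best]
```

Note that in CPython a pure-Python gain function would be serialised by the GIL, so a real implementation would push the scoring into native code or processes; the thread version merely mirrors the paper's shared-memory, thread-level design.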
Ad Hoc and Sensor Wireless Networks, 2011
The clustering schemes devised so far mostly concentrate on satisfying the classic requirement of wireless sensor networks, energy awareness, to prolong network lifetime. Meanwhile, there is a growing demand for heterogeneous nodes in sensor network applications, such as distributed composite event detection, which introduces new requirements such as maximizing the sensor diversity of each cluster. Existing clustering schemes usually ignore this feature. In this paper, we introduce a diversity-based energy-aware clustering protocol (DEC) that not only uses the energy level of nodes when electing cluster heads, but also produces clusters with a higher diversity of sensor types among their members. We use the term complete clusters for clusters that have the maximum diversity of sensor types. Simulation results support the idea that DEC partitions the network with a higher percentage of complete clusters, up to 40% more than the clusters generated by the ACE and CPCP algorithms in a network with a connectivity degree of 10. Moreover, cluster heads in DEC have higher energy levels than those of the aforementioned clustering schemes.