Proceedings of the 15th ACM Web Science Conference 2023
Toxicity is endemic to online social networks (OSNs), including Twitter. It follows a Pareto-like distribution in which most of the toxicity is generated by a very small number of profiles, so analyzing and characterizing these "toxic profiles" is critical. Prior research has largely focused on sporadic, event-centric toxic content (i.e., tweets) to characterize toxicity on the platform. Instead, we approach the problem of characterizing toxic content from a profile-centric point of view. We study 143K Twitter profiles and focus on the behavior of the top 1% of producers of toxic content on Twitter, based on the toxicity scores of their tweets as computed by the Perspective API. With a total of 293M tweets spanning 16 years of activity, the longitudinal data allow us to reconstruct the timelines of all profiles involved. We use these timelines to gauge the behavior of the most toxic Twitter profiles against the rest of the Twitter population. We study the posting patterns of highly toxic accounts, based on their frequency and how prolific they are, the nature of their hashtags and URLs, profile metadata, and Botometer scores. We find that highly toxic profiles post coherent and well-articulated content; their tweets keep to a narrow theme, with lower diversity in hashtags, URLs, and domains; they are thematically similar to each other; and they show a high likelihood of bot-like behavior, likely with progenitors intent on influence, given their high fake-followers scores. Our work contributes insight into the top 1% of toxic profiles on Twitter and establishes the profile-centric approach as a beneficial way to investigate toxicity on Twitter. Identifying the most toxic profiles can aid in reporting and suspending such profiles, making Twitter a better place for discussion. Finally, we contribute to the research community a large-scale, longitudinal dataset annotated with six types of toxicity scores.
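To make the scoring step concrete, a minimal sketch of how one tweet could be scored follows; the endpoint and response fields match Google's public Perspective API documentation, but the API key is a placeholder and the exact six attributes used by the paper are our assumption (these are Perspective's six production attributes).

```python
# Hedged sketch: scoring a tweet with the Perspective API.
import requests

PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = "YOUR_API_KEY"  # hypothetical placeholder

# Perspective's six production attributes; assumed to match the paper's
# "six types of toxicity scores".
ATTRIBUTES = ["TOXICITY", "SEVERE_TOXICITY", "IDENTITY_ATTACK",
              "INSULT", "PROFANITY", "THREAT"]

def score_tweet(text: str) -> dict:
    """Return an {attribute: score} mapping for one tweet."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {attr: {} for attr in ATTRIBUTES},
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": API_KEY}, json=body)
    resp.raise_for_status()
    scores = resp.json()["attributeScores"]
    return {a: scores[a]["summaryScore"]["value"] for a in ATTRIBUTES}

# Profiles can then be ranked by, e.g., mean TOXICITY over their timeline;
# the top 1% of that ranking forms the "most toxic" cohort.
```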
This research develops a methodology to identify transactions through data-driven tracking and analysis of ransomware-Bitcoin payment networks [30]. We demonstrate the methodology by applying the GraphSAGE embedding algorithm to the WannaCry ransomware-Bitcoin cash-out network. The paper takes a data-driven approach to building a machine learning system that allows analysts to define features relevant to ransomware-Bitcoin payment networks.
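As a rough illustration of the embedding step, a minimal GraphSAGE encoder in PyTorch Geometric might look as follows; the library choice, layer sizes, and feature names are our assumptions, not the paper's stated configuration.

```python
# Minimal GraphSAGE encoder sketch, assuming each Bitcoin address is a node
# with hand-crafted transaction features (illustrative dimensions).
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv

class SAGEEncoder(torch.nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int = 64, out_dim: int = 32):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hidden_dim)   # aggregate 1-hop neighbours
        self.conv2 = SAGEConv(hidden_dim, out_dim)  # aggregate 2-hop neighbours

    def forward(self, x, edge_index):
        # x: [num_addresses, in_dim] node features (e.g., tx counts, volumes)
        # edge_index: [2, num_payments] directed payment edges
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)  # per-address embeddings

# The embeddings can feed a downstream classifier that flags addresses
# belonging to the ransomware cash-out network.
```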
This paper performs a large-scale study of dependency chains in the web, finding that around 50% of first-party websites render content that they did not directly load. Although the majority (84.91%) of websites have short dependency chains (below 3 levels), we find websites with dependency chains exceeding 30 levels. Using VirusTotal, we show that 1.2% of these third parties are classified as suspicious; although seemingly small, this limited set of suspicious third parties has remarkable reach into the wider ecosystem. We find that 73% of the websites under study load resources from suspicious third parties, and 24.8% of first-party webpages contain at least three third parties classified as suspicious in their dependency chain. By running sandboxed experiments, we observe a range of activities, with the majority of suspicious JavaScript code downloading malware.
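For intuition, here is a minimal sketch of the chain-depth statistic, assuming a crawl log that records, for each loaded resource, which resource (or the first party) initiated it; the function names and data layout are illustrative, not the paper's.

```python
# Sketch: depth of a resource's dependency chain, given an (assumed acyclic)
# mapping from each resource URL to the URL that initiated its load.

def chain_depth(resource: str, initiator: dict[str, str], first_party: str) -> int:
    """Number of hops from the first party to `resource`."""
    depth = 0
    node = resource
    while node != first_party:      # assumes well-formed, acyclic crawl data
        node = initiator[node]
        depth += 1
    return depth

def max_chain_depth(resources, initiator, first_party) -> int:
    return max(chain_depth(r, initiator, first_party) for r in resources)

# Per the paper, ~84.91% of sites would report a maximum depth below 3,
# while a small tail exceeds 30 levels.
```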
NOMS 2022-2022 IEEE/IFIP Network Operations and Management Symposium
Micro-segmentation is an emerging security technique that separates physical networks into isolated logical micro-segments (workloads). By tying fine-grained security policies to individual workloads, it limits an attacker's ability to move laterally through the network, even after infiltrating the perimeter defences. While micro-segmentation has proved effective at shrinking the attack surface of enterprise networks, its impact assessment is almost absent from the literature. This research develops an analytical framework to characterise and quantify the effectiveness of micro-segmentation in enhancing network security. We rely on a twofold graph-feature-based framework of network connectivity and attack graphs to evaluate network exposure and robustness, respectively. While the former assesses the connectedness, reachability, and centrality of network assets, the latter depicts the ability of the network to resist goal-oriented attackers. Tracking the variation of the formulated metric values after the deployment of micro-segmentation reveals exposure reduction and robustness improvement in the range of 60%-90%.
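The flavour of the graph-feature approach can be sketched with networkx; the metric choices below (density, average reachability, peak betweenness) are common stand-ins and may differ from the paper's exact formulations.

```python
# Illustrative exposure metrics on a network connectivity graph.
import networkx as nx

def exposure_metrics(g: nx.Graph) -> dict:
    n = g.number_of_nodes()  # assumes n > 1
    # Connectedness: density of realized connections among assets.
    density = nx.density(g)
    # Reachability: average fraction of assets reachable from each asset.
    reach = sum(len(nx.descendants(g, v)) for v in g) / (n * (n - 1))
    # Centrality: how concentrated paths are on a few pivotal assets.
    max_betweenness = max(nx.betweenness_centrality(g).values())
    return {"density": density, "reachability": reach,
            "max_betweenness": max_betweenness}

# Computing these before and after micro-segmentation quantifies the
# exposure reduction; the paper reports improvements of 60%-90%.
flat = nx.complete_graph(6)                  # un-segmented network
segmented = nx.path_graph(6)                 # coarse stand-in for segments
print(exposure_metrics(flat), exposure_metrics(segmented))
```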
Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 2022
Contemporary mobile applications (apps) are designed to track, use, and share users' data, often without their consent, which results in potential privacy and transparency issues. To investigate whether mobile apps have always been (non-)transparent about how they collect information about users, we perform a longitudinal analysis of the historical versions of 268 Android apps, comprising 5,240 app releases or versions between 2008 and 2016. We detect inconsistencies between apps' behaviors and the stated use of data collection in their privacy policies to reveal compliance issues. We utilize machine learning techniques to classify privacy policy text and identify the purported practices that collect and/or share users' personal information, such as phone numbers and email addresses. We then uncover an app's data leaks through static and dynamic analysis. Our results show a steady increase over time in the number of apps' data collection practices that are undisclosed in their privacy policies. This behavior is particularly troubling since the privacy policy is the primary tool for describing an app's privacy protection practices. We find that newer versions of the apps are likely to be more non-compliant than their preceding versions. The discrepancies between the purported and the actual data practices show that privacy policies are often incoherent with apps' behaviors, thus defying the 'notice and choice' principle that applies when users install apps.
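As a hedged sketch of the policy-classification step: a TF-IDF plus linear-SVM pipeline is a common choice for this kind of text classification, but the paper does not specify this exact model here, and the training segments and labels below are hypothetical.

```python
# Sketch: classifying privacy-policy segments by disclosed data practice.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical labelled policy segments.
segments = ["we may share your email address with partners",
            "we do not collect contact information"]
labels = ["discloses_email", "no_disclosure"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(segments, labels)

# Predicted disclosures are then diffed against the data flows observed in
# static/dynamic analysis; a flow without a matching disclosure is flagged
# as a potential compliance issue.
print(clf.predict(["your email may be shared with advertisers"]))
```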
2019 IEEE 44th LCN Symposium on Emerging Topics in Networking (LCN Symposium), 2019
Websites employ third-party ads and tracking services, leveraging cookies and JavaScript code to deliver ads and track users' behavior, causing privacy concerns. To limit online tracking and block advertisements, several ad-blocking (black) lists have been curated, consisting of the URLs and domains of well-known ads and tracking services. In this paper, using the Internet Archive's Wayback Machine, we collect a retrospective view of the Web to analyze the evolution of ads and tracking services and evaluate the effectiveness of ad-blocking blacklists. We propose metrics to capture the efficacy of ad-blocking blacklists and to investigate whether these blacklists have been reactive or proactive in tackling online ad and tracking services. We introduce a stability metric to measure temporal changes in the ads and tracking domains blocked by ad-blocking blacklists, and a diversity metric to measure the ratio of newly detected ads and tracking domains. We observe that the ads and tracking domains in websites change over time, and among the ad-blocking blacklists we investigated, our analysis reveals that some blacklists were better informed about the existence of ads and tracking domains, but their rate of change was slower than that of other blacklists. Our analysis also shows that the Alexa top 5K websites in the US, Canada, and the UK have the highest number of ads and tracking domains per website, and the highest proactive scores. This suggests that ad-blocking blacklists are updated by prioritizing ads and tracking domains reported on popular websites from these countries.
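A minimal sketch of the two metrics, paraphrased from the abstract (the paper's exact formulas may differ); the snapshot sets are hypothetical.

```python
# Sketch: stability and diversity of a blacklist across two snapshots.

def stability(prev: set[str], curr: set[str]) -> float:
    """Fraction of previously blocked domains still blocked now."""
    return len(prev & curr) / len(prev) if prev else 1.0

def diversity(prev: set[str], curr: set[str]) -> float:
    """Ratio of newly detected domains in the current snapshot."""
    return len(curr - prev) / len(curr) if curr else 0.0

snapshots = [{"ads.example.com"}, {"ads.example.com", "trk.example.net"}]
print(stability(*snapshots), diversity(*snapshots))  # 1.0 0.5
```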
Objectives: To investigate whether and what user data are collected by health related mobile applications (mHealth apps), to characterise the privacy conduct of all the available mHealth apps on Google Play, and to gauge the associated risks to privacy. Design: Cross sectional study. Setting: Health related apps developed for the Android mobile platform, available in the Google Play store in Australia and belonging to the medical, and health and fitness, categories. Participants: Users of 20 991 mHealth apps (8074 medical and 12 917 health and fitness) found in the Google Play store; in-depth analysis was done on 15 838 apps that did not require a download or subscription fee, compared with 8468 baseline non-mHealth apps. Main outcome measures: Primary outcomes were characterisation of the data collection operations in the apps' code and of the data transmissions in the apps' traffic; analysis of the primary recipients for each type of user data; presence of adverts and trackers in the app traf...
Journal of the American Medical Informatics Association, 2021
Objective: We conduct a first large-scale analysis of mobile health (mHealth) apps available on Google Play, with the goal of providing a comprehensive view of mHealth apps' security features and gauging the associated risks for mHealth users and their data. Materials and Methods: We designed an app collection platform that discovered and downloaded more than 20 000 mHealth apps from the Medical and Health & Fitness categories on Google Play. We performed a suite of app code and traffic measurements to highlight a range of app security flaws: certificate security, sensitive or unnecessary permission requests, malware presence, communication security, and security-related concerns raised in user reviews. Results: Compared to baseline non-mHealth apps, mHealth apps generally adopt more reliable signing mechanisms and request fewer dangerous permissions. However, significant fractions of mHealth apps expose users to serious security risks. Specifically, 1.8% of mHealth apps package suspici...
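One of the code-measurement steps, checking for dangerous permission requests, can be sketched with androguard; the dangerous-permission subset and the file name below are illustrative assumptions, not the paper's full methodology.

```python
# Sketch: flagging dangerous permission requests in an APK with androguard.
from androguard.misc import AnalyzeAPK

# Small illustrative subset of Android "dangerous" permissions.
DANGEROUS = {"android.permission.READ_CONTACTS",
             "android.permission.ACCESS_FINE_LOCATION",
             "android.permission.RECORD_AUDIO"}

def dangerous_permissions(apk_path: str) -> set[str]:
    a, _dex, _analysis = AnalyzeAPK(apk_path)  # parse manifest and DEX code
    return set(a.get_permissions()) & DANGEROUS

print(dangerous_permissions("some_mhealth_app.apk"))  # hypothetical file
```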
The Web is a tangled mass of interconnected services, where websites import a range of external resources from various third-party domains. These third parties can, in turn, load further resources hosted on other domains. For each website, this creates a dependency chain underpinned by a form of implicit trust between the first party and transitively connected third parties. The chain can only be loosely controlled, as first-party websites often have little, if any, visibility of where these resources are loaded from. This paper performs a large-scale study of dependency chains in the Web, finding that around 50% of first-party websites render content that they did not directly load. Although the majority (84.91%) of websites have short dependency chains (below 3 levels), we find websites with dependency chains exceeding 30 levels. Using VirusTotal, we show that 1.2% of these third parties are classified as suspicious; although seemingly small, this limited set of suspicious third parties has remarkable reach into the wider ecosystem. By running sandboxed experiments, we observe a range of activities, with the majority of suspicious JavaScript downloading malware; worryingly, we find this propensity is greater among implicitly trusted JavaScripts.
The web is a tangled mass of interconnected services, whereby websites import a range of external resources from various third-party domains. The latter can also load further resources hosted on other domains. For each website, this creates a dependency chain underpinned by a form of implicit trust between the first party and transitively connected third parties. The chain can only be loosely controlled, as first-party websites often have little, if any, visibility on where these resources are loaded from. This article performs a large-scale study of dependency chains in the web, finding that around 50% of first-party websites render content that they do not directly load. Although the majority (84.91%) of websites have short dependency chains (below three levels), we find websites with dependency chains exceeding 30. Using VirusTotal, we show that 1.2% of these third parties are classified as suspicious; although seemingly small, this limited set of suspicious third parties has remar...
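The VirusTotal labelling step used in these dependency-chain studies can be sketched against the public v3 API; the endpoint and response fields follow VirusTotal's documentation, while the API key is a placeholder and the two-engine suspicion threshold is our illustrative choice, not necessarily the papers'.

```python
# Sketch: labelling a third-party domain as suspicious via VirusTotal v3.
import requests

VT_URL = "https://www.virustotal.com/api/v3/domains/{}"
API_KEY = "YOUR_VT_API_KEY"  # hypothetical placeholder

def is_suspicious(domain: str, min_engines: int = 2) -> bool:
    resp = requests.get(VT_URL.format(domain), headers={"x-apikey": API_KEY})
    resp.raise_for_status()
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    # Flag the third party if enough engines call it malicious or suspicious.
    return stats["malicious"] + stats["suspicious"] >= min_engines
```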
Corporate sustainability is considered a fundamental paradigm for, and solution to, creating a prosperous future for organizations. However, social sustainability issues and the problems of the COVID-19 pandemic have affected corporations and interrupted plans for sustainable development. To date, corporate sustainability frameworks have taken a relatively narrow view of this paradigm. This study highlights serious challenges to corporate sustainability while providing a framework that attempts to enable more sustainable business practices. To fill the gap in the literature, we have developed a framework to organize and prioritize important sustainability indicators. The first phase of the study classifies 45 sub-criteria of corporate sustainability under nine main categories, using a literature review and the novel Fuzzy Delphi method. The resulting categories are Corporate Governance, Product Responsibility, Transparency and Communication, Economic, Environmental, Social, Na...
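As a sketch of the Fuzzy Delphi screening step, under the common triangular fuzzy-number formulation; the paper's exact variant, aggregation rule, and acceptance threshold may differ, so everything below is an assumption for illustration.

```python
# Sketch: aggregating expert ratings given as triangular fuzzy numbers (l, m, u).

def fuzzy_delphi_score(ratings: list[tuple[float, float, float]]) -> float:
    l = min(r[0] for r in ratings)                  # most pessimistic lower bound
    m = sum(r[1] for r in ratings) / len(ratings)   # mean of modal values
    u = max(r[2] for r in ratings)                  # most optimistic upper bound
    return (l + m + u) / 3                          # simple-average defuzzification

# A sub-criterion is retained when its score meets a preset threshold
# (0.7 is a common choice in the literature).
ratings = [(0.5, 0.7, 0.9), (0.7, 0.9, 1.0), (0.5, 0.7, 0.9)]
print(fuzzy_delphi_score(ratings) >= 0.7)  # True for this example
```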
Background: Atherosclerotic cardiovascular disease (ASCVD) is driven by multifaceted contributions of the immune system. However, the dysregulation of immune cells that leads to ASCVD is poorly understood. We determined the longitudinal association of components of innate and adaptive immunity with ASCVD, and assessed whether arterial calcifications play a role in this association. Methods and findings: Granulocyte (innate immunity) and lymphocyte (adaptive immunity) counts were determined 3 times (2002-2008, mean age 65.2 years; 2009-2013, mean age 69.0 years; and 2014-2015, mean age 78.5 years) in participants of the population-based Rotterdam Study without ASCVD at baseline. Participants were followed up for ASCVD or death until 1 January 2015. A random sample of 2,366 participants underwent computed tomography at baseline to quantify arterial calcification volume in 4 vessel beds. We studied the association between immunity components and risk of ASCVD, and assessed whether immunity components were related to arterial calcifications at baseline. Of 7,730 participants (59.4% women), 801 developed ASCVD during a median follow-up of 8.1 years. An increased granulocyte count increased ASCVD risk (adjusted hazard ratio for a doubled granulocyte count [95% CI] = 1.78 [1.34-2.37], P < 0.001). Higher granulocyte counts were related to larger calcification volumes in all vessels, most prominently in the coronary arteries (mean difference in calcium volume [mm³] per SD increase in granulocyte count [95% CI] = 32.3 [9.9-54.7], P < 0.001). The association between granulocyte count and incident coronary heart disease and stroke was partly mediated by coronary artery calcification (overall proportion
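The survival-analysis step can be sketched with the lifelines implementation of the Cox proportional hazards model the study uses; the dataframe below is tiny, synthetic, and purely illustrative, and the covariate set is far smaller than the study's.

```python
# Sketch: Cox proportional hazards fit on synthetic per-participant data.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "followup_years":    [8.1, 5.3, 9.0, 7.2, 6.5, 8.8, 4.9, 7.7],
    "ascvd_event":       [1,   0,   1,   0,   1,   0,   1,   0],
    "log2_granulocytes": [1.5, 0.8, 1.4, 1.2, 0.9, 1.0, 1.3, 0.7],
    "age":               [65, 70, 62, 68, 71, 66, 69, 63],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="ascvd_event")
# exp(coef) of log2_granulocytes is the hazard ratio per doubling of the
# granulocyte count (reported as 1.78 [1.34-2.37] in the study).
cph.print_summary()
```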
In this work, a switched-beam 2-element multiple-input multiple-output (MIMO) antenna system is proposed at mm-wave bands for 5G applications. The antenna system consists of a 4×4 array of connected slot antennas for each MIMO element, forming the connected antenna array (CAA). A feed network based on a Butler matrix is used to excite the CAA and to steer the beam to different locations, which enhances the diversity performance. The mm-wave MIMO antenna system operates at 28 GHz with a measured -10 dB bandwidth of at least 830 MHz (27.4-28.23 GHz). It is fabricated on a commercially available RO3003 substrate with a dielectric constant of 3.3 and a height of 0.13 mm. The dimensions of the board are 150×100×0.13 mm³. The proposed design is compact, low profile, and suitable for future 5G-enabled tablet PCs.
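A quick arithmetic check on the figures quoted above (free-space wavelength at 28 GHz and the fractional bandwidth implied by 27.4-28.23 GHz); this uses only the numbers in the abstract.

```python
# Sanity arithmetic on the reported antenna figures.
c = 299_792_458            # speed of light, m/s
f = 28e9                   # centre frequency, Hz
bw = 28.23e9 - 27.4e9      # measured -10 dB bandwidth, Hz (830 MHz)
print(f"wavelength: {c / f * 1e3:.2f} mm")       # ~10.71 mm
print(f"fractional bandwidth: {bw / f:.2%}")     # ~2.96%
```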
Background: The role of subtle disturbances of brain perfusion in the risk of transient ischemic attack (TIA) or ischemic stroke remains unknown. We examined the association between global brain perfusion and the risk of TIA and ischemic stroke in the general population. Methods and Results: Between 2005 and 2015, 5289 stroke-free participants (mean age, 64.3 years; 55.6% women) from the Rotterdam Study underwent phase-contrast brain magnetic resonance imaging at baseline to assess global brain perfusion. These participants were followed for incident TIA or ischemic stroke until January 1, 2016. We investigated associations between global brain perfusion (mL of blood flow/100 mL of brain/min) and risk of TIA and ischemic stroke using Cox regression models with adjustment for age, sex, and cardiovascular risk factors. Additionally, we investigated whether associations were modified by retinal vessel calibers, small and large vessel disease, blood pressure, and heart rate. During a median ...
Retinal structures may serve as a biomarker for dementia, but longitudinal studies examining this link are lacking. To investigate the association of inner retinal layer thickness with prevalent and incident dementia in a general population of Dutch adults. From September 2007 to June 2012, participants from the prospective population-based Rotterdam Study who were 45 years and older, had gradable retinal optical coherence tomography images, and at baseline were free from stroke, Parkinson disease, multiple sclerosis, glaucoma, macular degeneration, retinopathy, myopia, hyperopia, and optic disc pathology were included. They were followed up until January 1, 2015, for the onset of dementia. Inner retinal layer thicknesses (ie, retinal nerve fiber layer [RNFL] and ganglion cell-inner plexiform layer [GC-IPL]) were measured on optical coherence tomography images. Odds ratios and hazard ratios for incident dementia per SD decrease in retinal layer thickness, adjusted for age, s...
2008 IEEE International Performance, Computing and Communications Conference, 2008
Advances in wireless communication and embedded electronic systems have revolutionized everyday life via the inter-networking of sophisticated, tiny, and useful devices. Mobile RFID (mRFID) systems make it possible to get information about entities through ...
2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2012
We explore interesting connections between blind source separation (BSS) and acoustic echo cancellation (AEC), and develop a framework in which the AEC problem is transformed into and solved as a BSS problem. We show that, by careful selection of the BSS algorithm, the double-talk (DT) problem in AEC is solved without the need for a DT detector or a step-size controller. Furthermore, echo cancellation performance is maintained even during single-talk, when only the far-end speaker is active. The algorithm converges to the true echo path much faster than normalized least-mean-squares adaptation. Moreover, the proposed algorithm does not require knowledge of the echo-tail length and is robust against underestimation of the echo-filter length. The simple implementation and fast convergence of the proposed method make it a suitable candidate for implementation on low-power general-purpose DSPs.
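For context, the NLMS baseline that the proposed method is compared against can be sketched in a few lines of NumPy (the BSS-based canceller itself is more involved and not reproduced here); the tap count and step size are illustrative.

```python
# Sketch: normalized least-mean-squares (NLMS) echo cancellation baseline.
import numpy as np

def nlms_aec(far_end: np.ndarray, mic: np.ndarray,
             taps: int = 128, mu: float = 0.5, eps: float = 1e-6):
    """Cancel the far-end echo from the microphone signal."""
    w = np.zeros(taps)                   # echo-path estimate
    err = np.zeros(len(mic))             # echo-cancelled output
    x_buf = np.zeros(taps)               # recent far-end samples
    for n in range(len(mic)):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = far_end[n]
        err[n] = mic[n] - w @ x_buf                        # a-priori error
        w += mu * err[n] * x_buf / (x_buf @ x_buf + eps)   # normalized step
    return err, w

# Unlike this baseline, the BSS formulation needs no double-talk detector
# and tolerates an underestimated echo-filter length, per the abstract.
```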
Proceedings of the 2014 CoNEXT on Student Workshop, 2014
Various web components and JavaScripts have been used to collect personally identifiable information, resulting in privacy concerns. Although several privacy-preserving tools have been proposed to limit online advertising and tracking, their use has been limited, mostly to a tech-savvy audience. In addition to poor, manual filtering-list maintenance and confusing settings, these privacy-preserving tools arguably have usability and intrusiveness issues. Among others, their brute-force blockage of all JavaScripts on a website may result in broken functionality, thus affecting the user's web experience. In this work, we propose a framework to quantify the intrusiveness of JavaScripts, with the ultimate objective of measuring the usability of privacy-preserving tools. We postulate that intrusive JavaScripts carry distinct characteristics that can be used to differentiate them from functional JavaScripts, i.e., scripts that are genuinely used to enhance the user's web experience. We propose a measurement methodology that can automatically separate tracking and privacy-intrusive JavaScripts from functional JavaScripts. Our methodology assumes only partial knowledge of the privacy-intrusive JavaScripts.
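A hedged sketch of the feature-extraction idea follows; the feature set is illustrative (API touches commonly associated with tracking), not the paper's actual characteristics.

```python
# Sketch: extracting tracking-related features from JavaScript source text.
import re

TRACKING_HINTS = [r"document\.cookie", r"localStorage", r"canvas",
                  r"navigator\.userAgent", r"new Image\("]

def script_features(source: str) -> dict:
    return {
        "length": len(source),
        # Counts of API touches commonly associated with tracking.
        **{hint: len(re.findall(hint, source)) for hint in TRACKING_HINTS},
    }

# These feature vectors would feed a classifier trained with only partial
# knowledge (a seed list) of known privacy-intrusive scripts.
print(script_features("var id = document.cookie; new Image().src = t;"))
```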
Pollution of the urban environment is fast becoming a grave threat to urban dwellers as levels of toxicity increase beyond safe limits. This is especially true in many low- and middle-income nations, where the rapid pace of industrialization and development, coupled with fast-growing and extremely dense urban centres, is leading to more serious environmental hazards for citizens. These include urban air pollution, which is difficult to tackle, especially in cities or nations where resources are scarce, awareness is minimal, and the urban environment is not an issue receiving government attention. This paper describes the development and pilot implementation of the Volunteer Internet-based Environment Watch (VIEW) in two cities in Pakistan. This system, which makes use of volunteers and their personal computers to monitor air pollution, can send valuable local environmental data to a central server for storage and collation. The system was successfully developed and deployed in a r...