Wireless Algorithms, Systems, and Applications, 2018
Trading data between users and service providers is a promising and efficient method to promote information exchange, service quality improvement, and the development of emerging applications, benefiting both individuals and society. Meanwhile, data resale (i.e., secondary use of data) is one of the most critical privacy issues hindering the ongoing process of data trading, and it is unfortunately ignored in many existing privacy-preserving schemes. In this paper, we tackle the issue of data resale from a special angle, i.e., promoting cooperation between the user and the service provider to prevent secondary use of data. For this purpose, we design a novel game-theoretical algorithm in which the user can unilaterally persuade the service provider to cooperate in data trading, achieving a “win-win” situation. We validate the performance of the proposed algorithm through in-depth theoretical analysis and comprehensive simulations.
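A minimal sketch of the cooperation logic such a game-theoretical treatment typically rests on, assuming a repeated-game formulation with a grim-trigger threat (the user stops trading after any resale); the payoff values, the function name, and the trigger mechanism are illustrative assumptions, not the paper's actual algorithm:

```python
# Hypothetical illustration: in a repeated data-trading game, the user can
# deter data resale by threatening to stop trading (grim trigger). The
# provider cooperates when the discounted value of continued trading
# exceeds the one-shot gain from reselling the data.

def provider_cooperates(coop_payoff: float,
                        resale_payoff: float,
                        discount: float) -> bool:
    """Return True if cooperation is the provider's best response.

    coop_payoff   -- provider's per-round payoff from honest trading
    resale_payoff -- provider's one-shot payoff from reselling the data
                     (after which the user stops trading forever)
    discount      -- provider's discount factor in (0, 1)
    """
    # Value of cooperating forever: coop_payoff / (1 - discount).
    # Value of defecting now: resale_payoff, then nothing afterwards.
    return coop_payoff / (1.0 - discount) >= resale_payoff

# Example: resale pays 5x a single honest trade, but a patient provider
# (discount 0.9) still prefers to cooperate: 1/(1-0.9) = 10 >= 5.
print(provider_cooperates(coop_payoff=1.0, resale_payoff=5.0, discount=0.9))
```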
Online Social Networks (OSNs) have transformed the way people socialize. However, while OSNs bring people convenience, privacy leakage has become a growing worldwide problem. Although several anonymization approaches have been proposed to protect user identities and social relationships, existing de-anonymization techniques have shown that users in an anonymized network can be re-identified using an external reference social network collected from the same network or from other networks with overlapping users. In this paper, we propose a novel social network de-anonymization mechanism that explores the impact of user attributes on de-anonymization accuracy. More specifically, we propose an approach to quantify the diversity of user attribute values and to select valuable attributes for generating a multipartite graph. Next, we partition this graph into communities and then map users at the community level and the network level, respectively. Finally, we employ a real-world dataset collected from Sina Weibo to evaluate our approach, demonstrating that our mechanism achieves better de-anonymization accuracy than the most influential de-anonymization method.
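One plausible way to quantify attribute-value diversity, sketched below with Shannon entropy over the empirical value distribution; the paper's actual diversity measure and the toy user records are assumptions:

```python
# Score each attribute by the Shannon entropy of its value distribution and
# keep the highest-entropy (most discriminating) attributes.
import math
from collections import Counter

def attribute_entropy(values):
    """Shannon entropy (bits) of the empirical value distribution."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

users = [
    {"city": "Beijing", "gender": "F"},
    {"city": "Shanghai", "gender": "F"},
    {"city": "Qingdao", "gender": "M"},
]

scores = {attr: attribute_entropy([u[attr] for u in users])
          for attr in users[0]}
valuable = [a for a, s in sorted(scores.items(), key=lambda kv: -kv[1])][:1]
print(scores, valuable)  # "city" is more diverse than "gender"
```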
2017 IEEE Symposium on Privacy-Aware Computing (PAC), 2017
Due to increasing concerns about securing private information, context-aware Internet of Things (IoT) applications are in dire need of data privacy preservation for users. In past years, game theory has been widely applied to design secure and privacy-preserving protocols for users to counter various attacks, and most existing work is based on a two-player game model, i.e., a user/defender-attacker game. In this paper, we consider a more practical scenario that involves three players: a user, an attacker, and a service provider; such a complicated system renders any two-player model inapplicable. To capture the complex interactions among the service provider, the user, and the attacker, we propose a hierarchical two-layer, three-player game framework. Finally, we carry out a comprehensive numerical study to validate our proposed game framework and theoretical analysis.
For shortest link scheduling (SLS), i.e., scheduling a given set of links with the minimum number of time slots, we consider distributed algorithm design that exploits the locality of the protocol model with high fidelity under Rayleigh fading. Unlike most previous works, which focus on distributed algorithm design under the deterministic SINR model and thus ignore fading effects in signal propagation, we first show that a link that is successful under the protocol model is also feasible under the deterministic SINR model, and that it can then be scheduled successfully with high probability under Rayleigh fading by upper-bounding the interference originating outside the interference range of the protocol model. On the basis of this key conclusion, we design the LLS-SLS algorithm, which solves SLS within (2eΔ^T_max + 1)δ log₂ Δ^T_max time slots for a constant δ, where Δ^T_max is the number of a sender's neighbors within a certain range and can be upper-bounded. Next, based on the concept of random contention, we design the CLLS algorithm, which schedules all links within 4(δ + 1)Δ^T_max ln Δ^T_max + 1 time slots. In addition, extensive simulations evaluate the performance of the two proposed algorithms.
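A Monte-Carlo sketch of the key step, estimating the success probability of a link under i.i.d. unit-mean Rayleigh fading when interferers are kept outside the protocol-model interference range; the path-loss exponent, SINR threshold, and geometry are illustrative assumptions, not the paper's values:

```python
import random

def sinr_success_prob(d_link, d_interferers, alpha=3.0, beta=2.0,
                      noise=1e-6, power=1.0, trials=100_000):
    """Estimate P[SINR >= beta] under i.i.d. unit-mean Rayleigh fading."""
    successes = 0
    for _ in range(trials):
        # Rayleigh fading => exponentially distributed channel gains.
        sig = power * random.expovariate(1.0) * d_link ** (-alpha)
        interf = sum(power * random.expovariate(1.0) * d ** (-alpha)
                     for d in d_interferers)
        if sig / (noise + interf) >= beta:
            successes += 1
    return successes / trials

# Interferers kept well outside the protocol-model interference range
# (here >= 4x the link length) leave the link successful w.h.p.
print(sinr_success_prob(d_link=1.0, d_interferers=[4.0, 5.0, 6.0]))
```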
Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 2015
This paper proposes MpDroid, an API-level multi-policy access control enforcement mechanism based on the Rule Set Based Access Control (RSBAC) framework. In MpDroid, we monitor and manage resources, services, and Android inter-component communication (ICC) under a multiple-policy mechanism, so as to restrict applications' access to sensitive APIs and prevent privilege-escalation attacks. When an application is installed, we build the mapping between sensitive APIs and the application's capabilities. Each rule in the user-defined and context policies is regarded as a limitation on the application's capabilities, while the system policy is used to match illegal ICC communications. Experimental results show that MpDroid realizes API-level access control for Android middleware and prevents illegal ICC communication on Android 4.1.4.
2009 Eighth IEEE/ACIS International Conference on Computer and Information Science, 2009
We consider the problem of indexing high-dimensional data for answering (approximate) similarity-search queries. Similarity indexes prove to be important in a wide variety of settings. In this paper we present SSREIK, a novel framework for efficient similarity search over the REIK network, which dynamically structures the network in order to build distributed routing information. It allows the evaluation of …
2010 2nd International Conference on Advanced Computer Control, 2010
An architecture-centric system is an implementation platform for self-adaptive goals. Requirements are processed as business services, which provide the driving power, and agent internetware is built as the carrier of granular services. The system achieves an autonomous barycenter model governed by architecture rules and completes its self-adaptive reasoning in order to support service discovery and design. For the logic of flexible, real-time business sequences, the service agents' decision-making cluster serves as the learning and adjustment mechanism, which completes the evolution of the implementation topology of the architecture-centric service.
To evaluate whether measurement of maternal serum inhibin A, activin A, and placental growth factor (PlGF) at 12 + 0 to 16 + 0 weeks of gestation, alone or in combination with the second-trimester uterine artery Doppler pulsatility index (PI), is useful in predicting pre-eclampsia. This was a case-control study of pre-eclampsia. From pregnant women attending their first antenatal examination at 12-16 weeks, we collected serum samples and stored them at -80 °C. All patients also underwent uterine artery Doppler examination to measure the PI at 22-24 weeks' gestation. We retrieved for analysis frozen samples from women who subsequently developed pre-eclampsia, as well as three control samples per woman, matched for gestational age and storage time. Inhibin A, activin A, and PlGF were measured by enzyme-linked immunosorbent assay (ELISA) by an examiner blinded to the pregnancy outcome. There were 31 cases with pre-eclampsia and 93 controls. Second-trimester uterine artery PI and marker levels were expressed as multiples of the median (MoM). The uterine artery PI was increased in pregnancies with pre-eclampsia compared with controls (mean ± SD, 1.45 ± 0.31 MoM vs. 1.02 ± 0.25 MoM, P < 0.001), as were the levels of inhibin A (1.57 ± 0.34 MoM vs. 1.08 ± 0.43 MoM, P < 0.001) and activin A (1.68 ± 0.38 MoM vs. 1.06 ± 0.42 MoM, P < 0.001). The level of PlGF was decreased in pre-eclampsia compared with controls (0.69 ± 0.23 MoM vs. 1.00 ± 0.26 MoM, P < 0.001). Receiver operating characteristic curves were analyzed for controls and cases; the areas under the curve (AUC) were 0.796 (95% CI, 0.712-0.880; P < 0.001) for inhibin A, 0.823 (95% CI, 0.746-0.899; P < 0.001) for activin A, 0.831 (95% CI, 0.752-0.910; P < 0.001) for PlGF, and 0.851 (95% CI, 0.783-0.920; P < 0.001) for uterine artery PI. The combination of activin A, inhibin A, and PI using logistic regression analysis yielded an AUC of 0.907 (95% CI, 0.830-0.938; P < 0.001) with a sensitivity of 87% and a specificity of 80%. The combination of activin A, PlGF, and PI gave an AUC of 0.925 (95% CI, 0.852-0.978; P < 0.001) with a sensitivity of 90% and a specificity of 80%. Combining all four markers gave an AUC of 0.941 (95% CI, 0.891-0.990; P < 0.001) with a sensitivity of 93% and a specificity of 80%. Early second-trimester serum inhibin A, activin A, and PlGF, together with second-trimester uterine artery Doppler PI, may add further information for the prediction of pre-eclampsia; the combination of the three serum markers and uterine artery Doppler PI has the highest predictive value for pre-eclampsia.
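A sketch of the reported marker-combination step, fitting a logistic regression on the four predictors and reading off the ROC AUC; the data below are synthetic placeholders drawn from the group means and SDs quoted above, not the study data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_case, n_ctrl = 31, 93
# Columns: inhibin A, activin A, PlGF, uterine artery PI (all in MoM).
cases = rng.normal([1.57, 1.68, 0.69, 1.45], [0.34, 0.38, 0.23, 0.31], (n_case, 4))
ctrls = rng.normal([1.08, 1.06, 1.00, 1.02], [0.43, 0.42, 0.26, 0.25], (n_ctrl, 4))
X = np.vstack([cases, ctrls])
y = np.array([1] * n_case + [0] * n_ctrl)

# Combine the markers with logistic regression, then compute the ROC AUC.
model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"AUC of the four-marker combination: {auc:.3f}")
```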
Purpose: To investigate the bilateral symmetry of global corneal topography in normal corneas with a wide range of curvature, astigmatism, and thickness values. Design: Cross-sectional study. Methods: Topography images were recorded for the anterior and posterior corneal surfaces of 342 participants using a Pentacam. Elevation data were fitted to a general quadratic model that considered both translational and rotational displacements. Estimates of corneal shape parameters (elevation; radii in the two main directions, Rx and Ry; and the corresponding shape factors, Qx and Qy) and corneal position parameters (translational displacements x0, y0, and z0, and rotational displacements a, b, and c) were statistically compared between fellow corneas. Results: The general quadratic model fitted the topography data with average RMS errors of 1.7 ± 0.6 μm and 5.7 ± 1.3 μm for the anterior and posterior corneal surfaces, respectively. The comparisons showed highly significant bilateral correlations, with the differences between fellow corneas in Rx, Ry, Qx, and Qy of the anterior and posterior surfaces remaining insignificantly different from zero. Bilateral differences in elevation measurements at randomly selected points in both the central and peripheral corneal areas indicated strong mirror symmetry between fellow corneas. The mean geometric center (x0, y0, z0) of both right and left corneas was located on the temporal side of the apex in the anterior topography map and on the inferior-temporal side in the posterior map. The rotational displacement angle a about the X axis had similar distributions in bilateral corneas, while the rotation angle b about the Y axis showed both eyes tilting towards the nasal side. Further, the rotation angle c about the Z axis, which is related to corneal astigmatism, showed clear mirror symmetry. Conclusions: Analysis of corneal topography demonstrated strong and statistically significant mirror symmetry between bilateral corneas. This characteristic could help in the detection of pathological abnormalities, disease diagnosis, measurement validation, and surgery planning.
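A simplified sketch of the surface-fitting step: the study's general quadratic model also estimates translational and rotational displacements, whereas the illustration below fits only a plain quadric z = ax² + by² + cxy + dx + ey + f by linear least squares on synthetic elevation data:

```python
import numpy as np

def fit_quadratic(x, y, z):
    """Least-squares quadric fit; returns coefficients and RMS fit error."""
    A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    rms = np.sqrt(np.mean((A @ coeffs - z) ** 2))
    return coeffs, rms

# Synthetic elevation map (mm) of a toric-like surface with Rx != Ry,
# using the paraxial sag approximation z = x^2/(2*Rx) + y^2/(2*Ry).
rng = np.random.default_rng(1)
x, y = rng.uniform(-4, 4, 500), rng.uniform(-4, 4, 500)
z = x**2 / (2 * 7.8) + y**2 / (2 * 7.6) + rng.normal(0, 0.002, 500)
coeffs, rms = fit_quadratic(x, y, z)
print(f"RMS fit error: {rms * 1000:.1f} microns")
```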
Purpose. To detect possible differences in clinical outcomes between wavefront-guided laser in situ keratomileusis (LASIK) and wavefront-optimized LASIK for the treatment of myopia. Methods. A comprehensive literature search of the Cochrane Library, MEDLINE, and EMBASE was conducted to identify relevant trials comparing wavefront-guided with wavefront-optimized LASIK. A meta-analysis was performed on the results of the reports; statistical analysis was performed using RevMan 5.0 software. Results. Seven articles describing a total of 930 eyes were identified. There were no statistically significant differences in the final proportion of eyes achieving uncorrected distance visual acuity of 20/20 or better [odds ratio, 1.04; 95% confidence interval (CI), 0.66 to 1.65; p = 0.86] or manifest refractive spherical equivalent within ±0.50 D of the target (odds ratio, 0.96; 95% CI, 0.53 to 1.75; p = 0.90). No patient lost ≥2 lines of distance-corrected visual acuity after treatment. The changes in higher-order aberrations were not statistically significantly different between the two groups for preoperative root-mean-square (RMS) higher-order aberrations <0.3 μm (weighted mean difference, 0.01; 95% CI, −0.02 to 0.04; p = 0.57). However, wavefront-guided LASIK had a significantly better postoperative aberration profile than wavefront-optimized LASIK for preoperative RMS higher-order aberrations >0.3 μm (weighted mean difference, −0.10; 95% CI, −0.15 to −0.06; p < 0.00001). Conclusions. Both wavefront-guided and wavefront-optimized LASIK show excellent efficacy, safety, and predictability. Wavefront-guided technology may be a more appropriate choice for patients with preoperative RMS higher-order aberrations >0.3 μm.
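For orientation, a sketch of the inverse-variance fixed-effect pooling that underlies summary results such as the odds ratios above (one of the standard models RevMan offers); the per-study values are invented placeholders:

```python
import math

def pool_odds_ratios(ors, ses):
    """Inverse-variance fixed-effect pooled OR with a 95% CI.

    ors -- per-study odds ratios
    ses -- standard errors of the per-study log odds ratios
    """
    logs = [math.log(o) for o in ors]
    weights = [1.0 / se**2 for se in ses]
    pooled = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
    return math.exp(pooled), math.exp(lo), math.exp(hi)

# Placeholder studies, not the seven trials in the meta-analysis.
print(pool_odds_ratios(ors=[1.2, 0.9, 1.1], ses=[0.3, 0.4, 0.25]))
```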
The biomechanical changes in rabbit corneas preserved in storage media with different glucose concentrations are experimentally assessed. Two groups of eight fresh rabbit corneas were preserved for 10 days in the storage medium Optisol-GS with glucose concentrations of 14 and 28 mM, respectively. Eight additional corneas preserved glucose-free in the same medium served as the control group. All specimens were tested under inflation conditions up to 45 mmHg posterior pressure, and the pressure-deformation data obtained experimentally were analyzed using shell theory to derive the stress-strain behavior. Comparisons were made between the three specimen groups to determine the effect of glucose concentration on corneal biomechanical behavior and thickness. After storage, the mean central corneal thickness in the control, low-glucose, and high-glucose groups underwent statistically significant increases of 38.7 ± 11.3%, 45.4 ± 7.6%, and 50.6 ± 8.6%, respectively. The corneas also demonstrated consistent stiffness increases with higher glucose concentrations. The tangent modulus values determined at different pressure levels between 10 and 40 mmHg underwent statistically significant increases with glucose level (P < 0.05). Compared with the control group, the other specimens had higher tangent moduli by 17-20% on average with the low-glucose concentration and by 30-37% with the high-glucose concentration. The results indicate that the influence of the high glucose levels commonly experienced in diabetes on the biomechanical stiffness of the cornea should be considered in clinical management and in understanding corneal ectasia, glaucoma, and the response to refractive surgery.
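A sketch of the shell-theory reduction described above, converting pressure-deformation pairs into stress-strain data via the thin spherical-shell formula σ = pR/(2t) and taking the local slope as the tangent modulus; the geometry and deformation values are illustrative, not the study's measurements:

```python
import numpy as np

def stress_strain(pressures_mmhg, apical_rise_mm, R0=7.8, t=0.5):
    """Thin-shell stress (kPa) and a simple membrane strain estimate."""
    p_kpa = np.asarray(pressures_mmhg) * 0.1333  # mmHg -> kPa
    stress = p_kpa * R0 / (2.0 * t)              # sigma = p*R/(2*t)
    # Crude strain proxy: apical rise relative to the initial radius.
    strain = np.asarray(apical_rise_mm) / R0
    return stress, strain

p = [10, 15, 20, 25, 30, 35, 40]
rise = [0.10, 0.14, 0.17, 0.20, 0.22, 0.24, 0.26]  # stiffening response
stress, strain = stress_strain(p, rise)
# Tangent modulus at each pressure level = local slope d(stress)/d(strain).
print(np.gradient(stress, strain))
```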
IEEE Transactions on Knowledge and Data Engineering
In this paper, we study a distributed privacy-preserving learning problem in general social networks. Specifically, we consider a very general setting in which the agents in a given multi-hop social network are required to make sequential decisions to choose among a set of options featured by unknown stochastic quality signals. Each agent is allowed to interact with its peers through multi-hop communications, but with its privacy preserved. To serve these goals, we propose a four-staged distributed social learning algorithm. In a nutshell, our algorithm proceeds iteratively, and in every round, each agent i) randomly perturbs its adoption for privacy-preserving purposes, ii) disseminates the perturbed adoption over the social network in a nearly uniform manner through random walks, iii) selects an option by referring to its peers' perturbed latest adoptions, and iv) decides whether or not to adopt the selected option according to its latest quality signal. Through solid theoretical analysis, we answer two fundamental algorithmic questions about the performance of the four-staged algorithm: on one hand, we establish the convergence of our algorithm when there is a sufficient number of agents in the social network, each with incomplete and perturbed knowledge as input; on the other hand, we reveal the quantitative trade-off between the privacy loss and the communication overhead required for convergence. We also perform extensive simulations to validate our theoretical analysis and verify the efficacy of our algorithm.
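A sketch of what stage i) could look like if the random perturbation is instantiated as k-ary randomized response; the flip probability and its link to a privacy budget ε are assumptions for illustration, not necessarily the paper's exact mechanism:

```python
import math
import random

def perturb_adoption(adoption: int, num_options: int, epsilon: float) -> int:
    """Randomized response over a finite option set {0, ..., k-1}."""
    # Keep the true adoption with probability e^eps / (e^eps + k - 1).
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + num_options - 1)
    if random.random() < p_keep:
        return adoption
    # Otherwise report one of the other options uniformly at random.
    others = [o for o in range(num_options) if o != adoption]
    return random.choice(others)

# Smaller epsilon => more noise => stronger privacy but slower convergence,
# mirroring the privacy/communication trade-off analyzed in the paper.
print([perturb_adoption(2, num_options=4, epsilon=0.5) for _ in range(10)])
```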
Deep learning is a thriving field rich in practical applications and active research topics. It allows computers to learn from experience and to understand the world in terms of a hierarchy of concepts, each defined through its relations to simpler concepts. Relying on the strong capabilities of deep learning, we propose a convolutional generative adversarial network (Conv-GAN) based framework, titled MalFox, targeting adversarial malware example generation against third-party black-box malware detectors. Motivated by the rival game between malware authors and malware detectors, MalFox adopts a confrontational approach to produce perturbation paths, each formed by up to three methods (namely Obfusmal, Stealmal, and Hollowmal), to generate adversarial malware examples. To demonstrate the effectiveness of MalFox, we collect a large dataset consisting of both malware and benignware programs and investigate the performance of MalFox in terms of accuracy, detection rate, and the evasive rate of the generated adversarial malware examples. Our evaluation indicates that the accuracy can be as high as 99.0%, significantly outperforming 12 other well-known learning models. Furthermore, the detection rate is dramatically decreased by 56.8% on average, and the evasive rate is noticeably improved, by up to 56.2%.
Social network data is widely shared, forwarded, and published to third parties, which leads to risks of privacy disclosure. Even though the network provider usually perturbs the data before publishing it, attackers can still recover the anonymized data using collected auxiliary information. In this paper, we transform the de-anonymization problem into a node-matching problem on graphs, and our de-anonymization method reduces the number of nodes to be matched at each step. In addition, we use a spectral partitioning method to divide the social graph into disjoint subgraphs, so the approach can be effectively applied to large-scale social networks and executed in parallel on multiple processors. Through an analysis of the influence of the power-law degree distribution on de-anonymization, we jointly consider users' structural and personal information, which makes the extracted user features more practical.
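A sketch of the spectral partitioning step, bisecting a graph by the sign of the Fiedler vector (the eigenvector of the second-smallest eigenvalue of the graph Laplacian); recursing on each side yields the disjoint subgraphs that can be matched in parallel. The recursion and the toy graph are illustrative:

```python
import numpy as np

def spectral_bisect(adj):
    """Return a boolean partition of the vertices of an undirected graph."""
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj      # graph Laplacian L = D - A
    eigvals, eigvecs = np.linalg.eigh(lap)    # ascending eigenvalues
    fiedler = eigvecs[:, 1]                   # second-smallest eigenvector
    return fiedler >= 0

# Two triangles joined by a single bridge edge split cleanly.
A = np.array([[0,1,1,0,0,0],
              [1,0,1,0,0,0],
              [1,1,0,1,0,0],
              [0,0,1,0,1,1],
              [0,0,0,1,0,1],
              [0,0,0,1,1,0]])
print(spectral_bisect(A))  # e.g. [ True  True  True False False False ]
```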
2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS)
The extreme boom of online social networks paves the way for rumor propagation, which may incur economic losses and cause public panic. Hence, there is a pressing need to develop countermeasures that reduce the side effects of rumors. Different from state-of-the-art work, which has mostly conducted micro-perspective studies, our paper focuses on a macro-perspective one. In detail, our study sets aside technical details and analyzes the antagonistic dynamics between the rumormonger and the rumor suppressor, which provides a deep understanding of the overall development trend of rumor propagation. To reveal the potential of the rumormonger and the rumor suppressor, a competence-oriented analysis is proposed, in which the sufficient and necessary conditions for the existence of a Nash equilibrium in this rumor game are proved rigorously, allowing us to derive the steady ratios of people who trust or deny a rumor. To determine the optimal strategy for striking back at the rumormonger, a target-oriented analysis is conducted, in which analytical solutions are derived when the strategies of both players are static, and an iterative algorithm is employed to obtain numerical solutions when their strategies are dynamic. Both numerical and real-world-data-based simulations are used to verify the proposed competence-oriented and target-oriented analyses.
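A toy illustration of the game-theoretic viewpoint, running fictitious play between the two players on an invented 2x2 payoff matrix until the empirical strategy mixes settle near equilibrium; the paper's strategy spaces and payoffs are richer than this sketch:

```python
import numpy as np

# Rumormonger's payoffs (zero-sum: the suppressor receives the negation).
# Rows: post the rumor on platform A or B; columns: debunk on A or B.
R = np.array([[0.0, 2.0],
              [2.0, 0.0]])

counts_r, counts_s = np.ones(2), np.ones(2)
for _ in range(10_000):
    # Each side best-responds to the opponent's empirical strategy mix.
    br_r = int(np.argmax(R @ (counts_s / counts_s.sum())))
    br_s = int(np.argmin((counts_r / counts_r.sum()) @ R))
    counts_r[br_r] += 1
    counts_s[br_s] += 1

print("rumormonger mix:", counts_r / counts_r.sum())  # -> ~[0.5, 0.5]
print("suppressor mix: ", counts_s / counts_s.sum())  # -> ~[0.5, 0.5]
```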
Hybrid beamforming (HBF) is a promising approach to achieving a better balance between hardware complexity and system performance in massive MIMO communication systems. However, the HBF optimization problem is challenging due to its nonconvexity, both in design complexity and in spectral efficiency (SE) performance. In this work, a low-complexity convolutional neural network (CNN)-based HBF algorithm is proposed to solve the SE maximization problem under the constant-modulus and transmit-power constraints in a multiple-input single-output (MISO) system. The proposed CNN framework uses multiple convolutional blocks to extract more channel features. Because optimal HBF solutions are hard to obtain, we derive an unsupervised learning mechanism that avoids any labeled data when training the constructed CNN. We discuss the performance of the proposed algorithm in terms of both its generalization ability across multiple CSIs and its specific solving ability for …
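A sketch of the unsupervised objective described above: the training loss can simply be the negative spectral efficiency of the produced beamformer, so no labeled optimal solutions are needed. The MISO shapes, phase parametrization, and power normalization below are illustrative assumptions:

```python
import numpy as np

def spectral_efficiency(h, phases, digital_gain, noise_power=1.0):
    """SE (bits/s/Hz) of w = digital_gain * f_rf for channel h."""
    # Constant-modulus analog part: unit-magnitude phase shifters.
    f_rf = np.exp(1j * phases) / np.sqrt(len(phases))
    w = digital_gain * f_rf  # ||w||^2 = |digital_gain|^2 (power constraint)
    return np.log2(1.0 + np.abs(h.conj() @ w) ** 2 / noise_power)

rng = np.random.default_rng(0)
n_tx = 16
h = (rng.normal(size=n_tx) + 1j * rng.normal(size=n_tx)) / np.sqrt(2)

# Matched-phase analog beamformer (phases aligned with the channel) with a
# unit transmit-power digital gain; a CNN would instead predict `phases`.
phases = np.angle(h)
se = spectral_efficiency(h, phases, digital_gain=1.0)
print(f"unsupervised loss = -SE = {-se:.2f}")
```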