2017 IEEE International Symposium on Information Theory (ISIT)
We consider the setting of a master server who possesses confidential data (genomic or medical data, for example) and wants to run intensive computations on it, as part of a machine learning algorithm for instance. The master wants to distribute these computations to untrusted workers who have volunteered or are incentivized to help with this task. However, the data must be kept private (in an information-theoretic sense) and not revealed to the individual workers. The workers may be busy, or even unresponsive, and will take a random time to finish the task assigned to them. We are interested in reducing the aggregate delay experienced by the master. We focus on linear computations, an essential operation in many iterative algorithms. A known solution is to use a linear secret sharing scheme to divide the data into secret shares on which the workers can compute. We propose instead to use new secure codes, called Staircase codes, introduced previously by two of the authors. We study the delay induced by Staircase codes, which is always less than that of secret sharing. The reason is that secret sharing schemes need to wait for the responses of a fixed fraction of the workers, whereas Staircase codes offer more flexibility in this respect. For instance, for codes with rate R = 1/2, Staircase codes can lead to up to a 40% reduction in delay compared to secret sharing.
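The secret-sharing baseline that the abstract compares against can be sketched as follows: the master Shamir-shares its private data, each worker evaluates a public linear function on its shares, and the master reconstructs the result from any k responses. This is a minimal illustration, not the Staircase-code construction; the field size, the (n, k) parameters, and all variable names are illustrative.

```python
import random

P = 2**31 - 1   # a Mersenne prime; arithmetic is over GF(P)

def shamir_share(secret, n, k):
    """Split `secret` into n shares; any k of them reconstruct it
    (random polynomial of degree k-1 with constant term `secret`)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

# Master's private data vector and a public linear functional.
data = [10, 20, 30]
coeff = [1, 2, 3]          # master wants sum(coeff[i] * data[i]) = 140
n, k = 5, 3

# Share each entry; worker w receives one share of every entry.
shares = [shamir_share(d, n, k) for d in data]
worker_inputs = [[shares[e][w] for e in range(len(data))] for w in range(n)]

# Each worker applies the public linear functional to its shares.
# Shamir sharing is linear, so the responses are shares of the result.
def worker_compute(inp):
    x = inp[0][0]
    return (x, sum(c * y for c, (_, y) in zip(coeff, inp)) % P)

responses = [worker_compute(inp) for inp in worker_inputs]

# The master must wait for any k of the n workers, then interpolates.
print(reconstruct(random.sample(responses, k)))   # 140
```

The delay penalty discussed in the abstract is visible here: exactly k responses are always required, whereas Staircase codes let the master exploit partial work from slower workers.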
2021
Multi-party computation (MPC) is promising for privacy-preserving machine learning algorithms at edge networks, such as federated learning. Despite their potential, existing MPC algorithms fall short of adapting to the limited resources of edge devices. A promising solution, and the focus of this work, is coded computation, which advocates the use of error-correcting codes to improve the performance of distributed computing through "smart" data redundancy. In this paper, we focus on coded privacy-preserving computation using Shamir's secret sharing. In particular, we design novel coded privacy-preserving computation mechanisms, MatDot coded MPC (MatDot-CMPC) and PolyDot coded MPC (PolyDot-CMPC), by employing the recently proposed coded computation algorithms MatDot and PolyDot. We take advantage of the "garbage terms" that naturally arise when polynomials are constructed in the design of MatDot-CMPC and PolyDot-CMPC to reduce the number of workers needed for privacy-preserving computation. ...
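The underlying MatDot code can be sketched as follows: A is split into column blocks and B into row blocks, both encoded as polynomials so that the coefficient of x^(K-1) in their product equals AB, recoverable from any 2K-1 worker evaluations. This sketch shows only the coded-computation layer, without the secret-sharing randomness that MatDot-CMPC adds for privacy; the matrices, K, and the evaluation points are illustrative.

```python
from fractions import Fraction

def poly_mul(p, q):
    """Multiply coefficient lists (low degree -> high degree)."""
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def lagrange_basis(xs, e):
    """Coefficients of the Lagrange basis polynomial l_e over points xs."""
    num, den = [Fraction(1)], Fraction(1)
    for j, xj in enumerate(xs):
        if j != e:
            num = poly_mul(num, [Fraction(-xj), Fraction(1)])
            den *= xs[e] - xj
    return [c / den for c in num]

def mat_mul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(c, A):
    return [[c * x for x in row] for row in A]

# MatDot with K = 2: A split into K column blocks, B into K row blocks,
# so that A @ B = sum_i A_i @ B_i.
K = 2
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
A_blocks = [[[row[0]] for row in A], [[row[1]] for row in A]]   # 2x1 each
B_blocks = [[B[0]], [B[1]]]                                      # 1x2 each

# Encoding polynomials: pA(x) = sum_i A_i x^i, pB(x) = sum_j B_j x^(K-1-j).
# The coefficient of x^(K-1) in pA(x) @ pB(x) is exactly A @ B.
def pA(x):
    out = [[Fraction(0)] for _ in range(2)]
    for i, Ai in enumerate(A_blocks):
        out = mat_add(out, mat_scale(Fraction(x) ** i, Ai))
    return out

def pB(x):
    out = [[Fraction(0), Fraction(0)]]
    for j, Bj in enumerate(B_blocks):
        out = mat_add(out, mat_scale(Fraction(x) ** (K - 1 - j), Bj))
    return out

# 2K - 1 = 3 workers each multiply their two (smaller) encoded blocks.
xs = [Fraction(1), Fraction(2), Fraction(3)]
responses = [mat_mul(pA(x), pB(x)) for x in xs]

# Master recovers A @ B as the x^(K-1) coefficient via Lagrange weights.
result = [[Fraction(0), Fraction(0)] for _ in range(2)]
for e in range(len(xs)):
    lam = lagrange_basis(xs, e)[K - 1]
    result = mat_add(result, mat_scale(lam, responses[e]))

print(result)   # equals A @ B = [[19, 22], [43, 50]]
```

The "garbage terms" the abstract mentions are the other coefficients of pA(x)pB(x) (here A1B2 and A2B1), which MatDot-CMPC reuses to hide the secret-sharing randomness.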
Data outsourcing allows data owners to keep their data at untrusted clouds that do not ensure the privacy of data and/or computations. One useful framework for fault-tolerant data processing in a distributed fashion is MapReduce, which was developed for trusted private clouds. This paper presents algorithms for data outsourcing based on Shamir's secret-sharing scheme and for executing privacy-preserving SQL queries such as count, selection (including range selection), projection, and join, using MapReduce as the underlying programming model. The proposed algorithms prevent the untrusted cloud from learning the database or the query, while also preventing output-size and access-pattern attacks. Interestingly, our algorithms do not require the database owner, which only creates and distributes secret-shares once, to be involved in answering any query; hence, the database owner also cannot learn the query. We evaluate the efficiency of the algorithms on three parameters: (i) the number of communication rounds (between a user and a cloud), (ii) the total amount of bit flow (between a user and a cloud), and (iii) the computational load at the user side and the cloud side.
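The core mechanism behind a secret-shared count query can be sketched with a toy MapReduce round. For brevity this sketch uses n-of-n additive sharing rather than Shamir's scheme, and it assumes each record's 0/1 match indicator has already been produced in shared form by some matching subroutine (the paper constructs that step over shares so the clouds never see the indicators); all names and parameters are illustrative.

```python
import random
from collections import defaultdict

MOD = 2**61 - 1

def additive_shares(value, n):
    """n-of-n additive sharing: the n shares sum to `value` mod MOD."""
    parts = [random.randrange(MOD) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % MOD)
    return parts

# Per-record match indicators for the query predicate (1 = match).
# Assumed to be secret-shared already; each cloud sees only noise.
indicators = [1, 0, 1, 1, 0]
n_clouds = 3
record_shares = [additive_shares(b, n_clouds) for b in indicators]

# Map phase at cloud c: emit ("count", share) for every record it holds.
def map_phase(cloud_id):
    return [("count", rs[cloud_id]) for rs in record_shares]

# Reduce phase at cloud c: sum all shares grouped under the same key.
def reduce_phase(pairs):
    acc = defaultdict(int)
    for key, s in pairs:
        acc[key] = (acc[key] + s) % MOD
    return dict(acc)

partials = [reduce_phase(map_phase(c)) for c in range(n_clouds)]

# One value per cloud travels back; the user adds them to get the count.
count = sum(p["count"] for p in partials) % MOD
print(count)   # 3
```

Because sharing is linear, each cloud's reduce output is itself a share of the answer, so the user downloads one value per cloud regardless of database size, which is what keeps the communication rounds and bit flow small.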
Data outsourcing allows data owners to keep their data in public clouds, which do not ensure the privacy of data and computations. One fundamental and useful framework for processing data in a distributed fashion is MapReduce. In this paper, we investigate and present techniques for executing MapReduce computations in the public cloud while preserving privacy. Specifically, we propose a technique to outsource a database to public clouds using Shamir's secret-sharing scheme, and then provide privacy-preserving algorithms for performing search-and-fetch, equijoin, and range queries using MapReduce. Consequently, in our proposed algorithms, the public cloud cannot learn the database or the computations. All the proposed algorithms eliminate the role of the database owner, which only creates and distributes secret-shares once, and minimize the role of the user, which only needs to perform a simple operation to reconstruct results. We evaluate efficiency by (i) the number of communication rounds (between a user and a cloud), (ii) the total amount of bit flow (between a user and a cloud), and (iii) the computational load at the user side and the cloud side.
arXiv, 2021
Stragglers, Byzantine workers, and data privacy are the main bottlenecks in distributed cloud computing. Several prior works proposed coded computing strategies to jointly address all three challenges. They require either a large number of workers, a significant communication cost, or a significant computational complexity to tolerate malicious workers. Much of the overhead in prior schemes comes from the fact that they tightly couple coding for all three problems into a single framework. In this work, we propose the Verifiable Coded Computing (VCC) framework, which decouples the Byzantine node detection challenge from straggler tolerance. VCC leverages coded computing just for handling stragglers and privacy, and then uses an orthogonal approach of verifiable computing to tackle Byzantine nodes. Furthermore, VCC dynamically adapts its coding scheme to trade off straggler tolerance against Byzantine protection and vice versa. We evaluate VCC on compute-intensive distributed logistic regression...
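The verifiable-computing side can be illustrated with Freivalds' classic probabilistic check, which lets a master validate a worker's claimed matrix product using only matrix-vector multiplications (O(n^2) per trial instead of O(n^3)). This is a generic illustration of detecting a lying worker, not the specific verification protocol used in VCC.

```python
import random

def freivalds_check(A, B, C, trials=10, mod=2**61 - 1):
    """Probabilistically verify that C == A @ B.
    Each trial multiplies by a random vector r and compares A(Br) to Cr."""
    n = len(A)
    for _ in range(trials):
        r = [random.randrange(1, mod) for _ in range(n)]
        Br  = [sum(B[i][j] * r[j]  for j in range(n)) % mod for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) % mod for i in range(n)]
        Cr  = [sum(C[i][j] * r[j]  for j in range(n)) % mod for i in range(n)]
        if ABr != Cr:
            return False   # the worker's answer is certainly wrong
    return True            # wrong answers survive only with negligible probability

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
honest    = [[19, 22], [43, 50]]
byzantine = [[19, 22], [43, 51]]   # one corrupted entry
print(freivalds_check(A, B, honest), freivalds_check(A, B, byzantine))
```

A check of this shape is what allows Byzantine detection to be handled independently of the erasure coding used for stragglers: the master simply discards any response that fails verification.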
2021 XVII International Symposium "Problems of Redundancy in Information and Control Systems" (REDUNDANCY)
This paper considers the problem of multi-server Private Linear Computation, under joint and individual privacy guarantees. In this problem, identical copies of a dataset comprising K messages are stored on N non-colluding servers, and a user wishes to obtain one linear combination of a D-subset of messages belonging to the dataset. The goal is to design a scheme for performing the computation such that the total amount of information downloaded from the servers is minimized, while the privacy of the D messages required for the computation is protected. When joint privacy is required, the identities of all of these D messages must be kept private jointly, and when individual privacy is required, the identity of every one of these D messages must be kept private individually. In this work, we characterize the capacity, defined as the maximum achievable download rate, under both joint and individual privacy requirements. In particular, we show that when joint privacy is required the capacity is given by (1 + 1/N + ··· + 1/N^(K−D))^(−1), and when individual privacy is required the capacity is given by (1 + 1/N + ··· + 1/N^(⌈K/D⌉−1))^(−1), assuming that D divides K or K (mod D) divides D. Our converse proofs are based on reductions from two variants of the multi-server Private Information Retrieval problem in the presence of side information. Our achievability schemes build on our recently proposed schemes for single-server Private Linear Transformation and on the multi-server private computation scheme proposed by Sun and Jafar. Using similar proof techniques, we also establish upper and lower bounds on the capacity for the cases in which the user wants to compute L (potentially more than one) linear combinations. Specifically, we show that when joint privacy is required the capacity is upper bounded by …
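The two stated capacity expressions are easy to evaluate numerically; the following sketch computes them exactly for illustrative parameter values (N = 2 servers, K = 4 messages, D = 2 combined messages).

```python
from fractions import Fraction
from math import ceil

def joint_privacy_capacity(N, K, D):
    """C = (1 + 1/N + ... + 1/N^(K-D))^(-1)."""
    return 1 / sum(Fraction(1, N**i) for i in range(K - D + 1))

def individual_privacy_capacity(N, K, D):
    """C = (1 + 1/N + ... + 1/N^(ceil(K/D)-1))^(-1),
    valid when D divides K or K mod D divides D."""
    return 1 / sum(Fraction(1, N**i) for i in range(ceil(K / D)))

print(joint_privacy_capacity(2, 4, 2))       # 1/(1 + 1/2 + 1/4) = 4/7
print(individual_privacy_capacity(2, 4, 2))  # 1/(1 + 1/2)       = 2/3
```

As expected, the individual-privacy capacity is the larger of the two: hiding each identity separately is a weaker requirement than hiding the whole D-subset jointly, so less redundant download is needed.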
Innovations in Computer Science (ICS), January 2011; also as "Private and Perennial Distributed Computation," Workshop on Cryptography and Security in Clouds (CSC), 2011
Abstract: In this paper we consider the problem of n agents wishing to perform a given computation on common inputs in a privacy-preserving manner, in the sense that even if the entire memory contents of some of them are exposed, no information is revealed about the state of the computation, and where there is no a priori bound on the number of inputs. The problem has received ample attention recently in the context of swarm computing and Unmanned Aerial Vehicles (UAVs) that collaborate in a common mission, and schemes ...
2021
In this paper, we propose a secret-sharing-based secure multiparty computation (SMC) protocol for computing minimum spanning trees in dense graphs. The challenges in the design of the protocol arise from the need to access memory according to private addresses, as well as from the need to reduce round complexity. In our implementation, we use single-instruction-multiple-data (SIMD) operations to reduce the round complexity of the SMC protocol; the SIMD instructions reduce the impact of network latency among the three servers of the SMC platform. We present a state-of-the-art parallel privacy-preserving minimum spanning tree (MST) algorithm based on Prim's algorithm for dense graphs. We permute the graph with Sharemind so that the MST computation can be carried out on the shuffled graph outside the secure environment. We compare our protocol to the state of the art and find that its performance exceeds that of existing protocols when applied to dense graphs.
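For reference, the cleartext algorithm being made privacy-preserving is Prim's O(V^2) MST over an adjacency matrix, which suits dense graphs because it avoids priority queues and per-edge data-dependent branching. This is the plain, non-secure version; the protocol above evaluates the same logic over secret shares.

```python
INF = float("inf")

def prim_mst(w):
    """Prim's algorithm on an adjacency matrix; O(V^2), suited to dense graphs."""
    n = len(w)
    in_tree = [False] * n
    dist = [INF] * n      # cheapest edge connecting each vertex to the tree
    parent = [-1] * n
    dist[0] = 0
    total = 0
    for _ in range(n):
        # Pick the cheapest vertex not yet in the tree.
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: dist[v])
        in_tree[u] = True
        total += dist[u]
        # Relax edges from u to all remaining vertices.
        for v in range(n):
            if not in_tree[v] and w[u][v] < dist[v]:
                dist[v], parent[v] = w[u][v], u
    return total, parent

# Illustrative 4-vertex dense graph (INF = no edge).
W = [
    [INF, 2,   3,   INF],
    [2,   INF, 1,   4  ],
    [3,   1,   INF, 5  ],
    [INF, 4,   5,   INF],
]
print(prim_mst(W)[0])   # MST weight: edges 1-2, 0-1, 1-3 -> 1 + 2 + 4 = 7
```

In the SMC setting the data-dependent steps (the `min` selection and the comparisons in the relaxation loop) are exactly what must be computed obliviously, which is why the protocol shuffles the graph and leans on SIMD operations to keep the round count down.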
Lecture Notes in Computer Science, 2013
In the problem of private "swarm" computing, n agents wish to securely and distributively perform a computation on common inputs, in such a way that even if the entire memory contents of some of them are exposed, no information is revealed about the state of the computation. Recently, Dolev, Garay, Gilboa and Kolesnikov [ICS 2011] considered this problem in the setting of information-theoretic security, showing how to perform such computations on input streams of unbounded length. The cost of their solution, however, is exponential in the size of the Finite State Automaton (FSA) computing the function. In this work we are interested in efficient (i.e., polynomial-time) computation in the above model, at the expense of minimal additional assumptions. Relying on the existence of one-way functions, we show how to process unbounded inputs (but of course, polynomial in the security parameter) at a cost linear in m, the number of FSA states. In particular, our algorithms achieve the following. In the case of (n, n)-reconstruction (i.e., in which all n agents participate in the reconstruction of the distributed computation) and at most n − 1 corrupted agents, the agent storage, the time required to process each input symbol, and the time complexity of reconstruction are all O(mn). In the case of (n − t, n)-reconstruction (where only n − t agents take part in the reconstruction) and at most t corrupted agents, the agents' storage and the time required to process each input symbol are O(m·(n−1 choose n−t)). The complexity of reconstruction is O(mt). We achieve the above through a carefully orchestrated use of pseudorandom generators and secret sharing, and in particular a novel share re-randomization technique which may be of independent interest.
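The share re-randomization idea can be illustrated with (n, n) XOR sharing and pairwise PRG seeds arranged in a ring: each agent XORs two pseudorandom offsets into its share, and because every seed is used by exactly two agents, the offsets cancel and the hidden state is preserved while every share looks fresh. The paper's actual construction differs in detail; SHA-256 stands in for a PRG here, and all names and parameters are illustrative.

```python
import hashlib
import secrets

def prg(seed: bytes, round_no: int) -> int:
    """Deterministic pseudorandom 128-bit output (SHA-256 as a PRG stand-in)."""
    msg = seed + round_no.to_bytes(8, "big")
    return int.from_bytes(hashlib.sha256(msg).digest()[:16], "big")

n = 4
state = 0xC0FFEE                      # computation state to keep hidden

# (n, n) XOR sharing: all n shares are required to reconstruct the state.
shares = [secrets.randbits(128) for _ in range(n - 1)]
acc = 0
for s in shares:
    acc ^= s
shares.append(state ^ acc)

# Seed i is known only to agents i and (i + 1) mod n.
seeds = [secrets.token_bytes(16) for _ in range(n)]

def rerandomize(shares, round_no):
    """Each agent XORs in two PRG offsets; around the ring every seed is
    used exactly twice, so the offsets cancel and the secret is unchanged."""
    return [s ^ prg(seeds[i], round_no) ^ prg(seeds[(i - 1) % n], round_no)
            for i, s in enumerate(shares)]

for r in range(3):                    # e.g. one re-randomization per input symbol
    shares = rerandomize(shares, r)

recovered = 0
for s in shares:
    recovered ^= s
print(hex(recovered))                 # 0xc0ffee
```

Re-randomizing after every symbol is what makes exposure of an agent's memory at any single point in time uninformative about earlier rounds, without the agents having to communicate fresh randomness.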
PNUD-CHILE, 2019
This guide addresses, on the one hand, the main aspects related to the consideration of biodiversity in the creation, implementation, and evaluation of Clean Production Agreements (APL) in the agricultural sector, and on the other, it provides background and recommendations for applying 24 Good Agricultural Practices for Biodiversity (BPAB) useful for these purposes.
Ancient Egypt, New Technology: The Present and Future of Computer Visualization, Virtual Reality and Other Digital Humanities in Egyptology, 2023
3D visualizations of heritage objects such as ancient Egyptian coffins can be better used for general and specialist studies if they also provide annotations. This paper presents the system of annotations developed for the "Book of the Dead in 3D Project," which applies photogrammetry and digital annotations to coffins and sarcophagi produced in the 1st millennium BCE. The annotated models of the project include the transcription, translation, and transliteration of the magical texts inscribed on the coffins, which the user can read interactively while navigating the 3D model.