Front. Comput. Sci.
DOI 10.1007/s11704-016-6148-4
Primitives towards verifiable computation: a survey
Haseeb AHMAD1, Licheng WANG1, Haibo HONG2, Jing LI1, Hassan DAWOOD3, Manzoor AHMED4, Yixian YANG1
1 State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China
2 School of Computer Science and Information Engineering, Zhejiang Gongshang University, Hangzhou 310018, China
3 Department of Computer Engineering, University of Engineering and Technology, Taxila 47050, Pakistan
4 Future Internet & Communication Lab, Department of Electronic Engineering, Tsinghua University,
Beijing 100084, China
© Higher Education Press and Springer-Verlag Berlin Heidelberg 2017
Abstract The verifiable computation (VC) paradigm has attracted considerable attention, driven in practical terms by the rise of third party computation. In more explicit terms, VC allows resource constrained clients or organizations to securely outsource expensive computations to untrusted service providers, while acquiring publicly or privately verifiable results. Many mainstream solutions have been proposed to address the diverse problems within the VC domain. Some of them impose assumptions on the performed computations, while others take advantage of interactivity or non-interactivity, zero knowledge proofs, and arguments. Further proposals utilize the power of probabilistic checkable or computationally sound proofs. In this survey, we present a chronological study and classify the VC proposals based on their adopted domains. First, we provide a broad overview of the theoretical advancements while critically analyzing them. Subsequently, we present a comprehensive view of their utilization in state of the art VC approaches. Moreover, a brief overview of recent proof based VC systems that have lifted the VC domain to the verge of practicality is also presented. We use the presented study and reviewed results to identify the similarities, alterations, modifications, and hybridizations of different approaches, while comparing their advantages and reporting their overheads. Finally, we discuss the implementation of such VC based systems, their applications, and likely future directions.
Keywords verifiable computation, cloud computation, interactive, non-interactive, zero knowledge, probabilistic checkable proofs, computationally sound proofs

Received March 14, 2016; accepted September 21, 2016
E-mail: [email protected]
1 Introduction
Presently, the world is progressively being consolidated by an ever-growing volume of information available in digital format. Hence, the amount of sensitive, private, or otherwise personal information gathered and stored is persistently mounting. In such scenarios, one of the most challenging issues emerges when businesses need to purchase large storage space and computational power from untrusted third parties (a.k.a. service providers), such as the cloud. In certain circumstances, these service providers (SPs) can alter or forge the confidential data. They might return plausible results without performing any computation over the data, or may even have strong financial incentives for returning incorrect results of the requested computations. Another venue is the explosion of resource constrained devices (such as wireless sensors, security access cards, notebooks, and cell phones) that may also necessitate outsourcing of expensive computations (such as photo manipulation, data analysis, and cryptographic operations) to untrusted SPs.
Combining this outsourcing process with a verifiability property has given rise to a practical paradigm known as verifiable computation (also called verified computing or verified computation) [1]. The concept of verifiable computation (VC) is highly relevant to several real world scenarios, as illustrated by the following examples [2].
Volunteer computing is an application of distributed computing in which computer owners offer their computational
resources (storage and processing power) for computing the
small units of one or more projects. The basic mechanism is
to split large computations into small units, distribute these
units to volunteers for processing, and aggregate the results through a simple mechanism. The Great Internet Mersenne
Prime Search (GIMPS) was the first project that introduced
the concept of volunteer computing. A complete middleware system for volunteer computing known as Berkeley
Open Infrastructure for Network Computing (BOINC) includes a client, client graphical user interface, application
runtime system, server software, and the software implementing a project website [3, 4]. Some famous projects using
the BOINC platform are Predictor@home, SETI@home [5],
AQUA@home, Einstein@home, and Rosetta@home.
Cloud computing is another venue that provides the illusion of unlimited computing utilities to both organizations and individual clients at an affordable price. This
paradigm also concentrates on boosting the effectiveness of
the shared resources. According to the National Institute of
Standards and Technology (NIST), cloud computing has five
fundamental characteristics, four deployment models, and
three service models [6]. The fundamental characteristics are
listed as: 1) on-demand self-service; 2) broad network access; 3) resource pooling; 4) rapid elasticity; and 5) measured
service. The deployment models are termed as: 1) private
cloud; 2) community cloud; 3) public cloud; and 4) hybrid
cloud. The service models include: 1) Software as a Service
(SaaS); 2) Platform as a Service (PaaS); and 3) Infrastructure
as a Service (IaaS).
The aforementioned on-demand services are beneficial for businesses in the following ways: 1) no need to keep and maintain costly hardware; 2) paying only for what is needed; 3) location independence; 4) easy scalability; and 5) performance improvements for the organization. Apart from the above stated advantages, businesses may also face trust issues such as: 1) the confidential data can be altered or forged; 2) the required operations may not be performed on the exact data; and 3) the results provided after the computations may not be accurate.
Despite all the aforementioned attractive services, there might be several instances that could compromise the stored information or the computation results. In volunteer computing, for example, a dishonest volunteer can deliberately introduce errors during the computation process; or, in cloud services, a malicious worker could have financial incentives for providing incorrect results to the client. Furthermore, these services perform computation in a black-box manner, and may therefore be subject to untraceable faults such as hardware problems, corruption of data during storage or transmission, misconfiguration, and many more alike. Therefore, the questions that arise are: how can clients be assured that the SP has faithfully conducted the specified computation without revealing the confidential data? In other words, can clients efficiently verify that the output of the functions is honestly computed by the SP on the right outsourced data, without revealing the confidential data? Can the results be checked without replication or attestation of the computation?
To answer the aforementioned questions and to resolve these problems, the practical paradigm of VC has been introduced, founded on a large body of preceding works. The generalized framework for VC is presented in Fig. 1.
Fig. 1 Verifiable computation framework
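To make the workflow of Fig. 1 concrete, the following sketch outlines the four algorithms (key generation, problem generation, computation, and verification) around which non-interactive VC schemes such as that of Gennaro et al. [1], formalized later in Subsection 2.3, are typically organized. The class and method names are purely illustrative and are not taken from any cited scheme.

from abc import ABC, abstractmethod
from typing import Any, Tuple

class VerifiableComputation(ABC):
    """Illustrative client/worker interface for the framework of Fig. 1."""

    @abstractmethod
    def keygen(self, func: Any, security_param: int) -> Tuple[Any, Any]:
        """Client, one-time: encode the function F; return a public key for
        the worker and a secret key kept by the client."""

    @abstractmethod
    def probgen(self, sk: Any, x: Any) -> Tuple[Any, Any]:
        """Client, per input: encode the input x; return the encoding sent to
        the worker and a private verification value kept by the client."""

    @abstractmethod
    def compute(self, pk: Any, encoded_x: Any) -> Any:
        """Worker: evaluate the encoded function on the encoded input and
        return an encoded output, typically carrying a proof of correctness."""

    @abstractmethod
    def verify(self, sk: Any, tau_x: Any, encoded_y: Any) -> Any:
        """Client: either recover and accept y = F(x), or reject the answer."""

Any concrete instantiation must ensure that verification is substantially cheaper for the client than computing F(x) from scratch, which is precisely the efficiency requirement discussed below.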
The growing desire to outsource computational tasks from a relatively weak computational device (the delegator or client) to a more powerful SP, together with the problem of malicious workers or intruders altering and forging the results, motivated the formalization of the notions of security and verifiability of outsourced computation. However, security of the outsourced operations (computations) necessitates more and more properties that must be concurrently verified. Presently, this security is guaranteed by proof/argument based approaches and cryptography. Nonetheless, the range of security properties usually makes the underlying primitives quite complicated, and therefore harder to implement in practice. In a world where we desire to go faster while devices remain comparatively underpowered, it becomes essential to find the right solutions. Precisely, when
one wishes to maintain the privacy of clients, these primitives become even more complex, as one needs to combine apparently conflicting properties such as integrity and anonymity. This prompts the exploration of suitable tools for efficient VC operations that do not compromise the aforementioned properties, so that, after getting the results of the outsourced computation from the SP, the client is able to verify the accuracy of the outcome in such a way that (a) the client uses considerably fewer resources than are actually needed to perform the computation from scratch, and (b) the SP cannot use any resources beyond what is actually required for performing the computation. Above all, a key factor is that the confidential information of the client must not be compromised during this whole process.
The first steps towards VC are considered to be the proposals of Goldwasser et al. [7] and Babai [8], who formalized the notion of interactive proofs by employing an interactive and randomized verification mechanism in 1985. Interactive proofs consider a scenario in which the polynomial time verifier (client) communicates with the powerful but untrusted exponential time prover (worker), in order to verify the correctness of the computation results returned by the prover. In the same paper, Goldwasser et al. [7] also considered an extension of interactive proofs known as zero knowledge (ZK) proof systems, which take into account a scenario in which the prover convinces the verifier that a statement is true without divulging any related information. The difference lies only in the zero knowledge property (in the case of a true statement, a cheating verifier cannot learn anything apart from the validity of the statement), which is not considered by interactive proofs.
Later in 1991, Babai et al. pointed towards a setup in which the computation performed by a group of supercomputers could be verified by a single reliable PC [9]. This vision is considered the ultimate goal of practical VC. In actual terms, the authors improved the performance of interactive proofs while reducing the computational power difference between prover and verifier. This seminal work laid the foundation for probabilistic checkable proof (PCP) systems, which consider a scenario in which a polynomial time randomized algorithm (the verifier) has some restriction on randomness and is bound to read a limited number of bits of the proof prepared by the prover [10, 11]. The main shortcoming of PCPs was the long proof, which in practical terms was infeasible for the verifier to handle. Working on this shortcoming, Kilian investigated cryptographic commitment based arguments in 1992, in which the prover first commits to the encoded transcript, the verifier then generates and asks queries about specific locations of its own interest, and, while responding, the prover cannot provide invalid answers, as it would be caught by the verifier due to the commitment [12, 13]. Up to these refinements, the proof systems were interactive in nature; in 2000, Micali presented a non-interactive counterpart of the aforementioned argument systems by employing the random oracle model [14]. This non-interactive counterpart is called computationally sound (CS) proofs, in which it is very easy to construct proofs for a valid statement, while it is almost impossible to generate a proof for an invalid
statement. In subsequent works, many refinements of interactive proofs [15–20], ZK proofs and arguments [21–31], PCPs [9, 20, 32–36], and CS proofs [37–40] have been suggested by the scholastic community. Another series of solutions towards VC was put forward by researchers during the period 1994–2009, based on assumptions such as: 1) the usage of trusted hardware (secure crypto-processors, trusted platform modules, etc.) [41, 42]; 2) attestation [43, 44]; 3) replication [45, 46]; and 4) auditability [47, 48]. These proposals are discussed in detail within Section 2.
Until 2007, one aspect of the proof/argument based approaches was complexity theoretic, considering the power of proofs under restrictions such as randomness, verification time, depth, or rounds. The other lens was cryptographic, focusing on intractable languages with the requirement that all parties (i.e., prover and verifier) run in polynomial time. Despite many refinements, both of these aspects focused on efficient verification of proofs rather than efficient generation of proofs. Goldwasser et al. revisited interactive protocols for tractable languages and proposed a public coin interactive approach (the muggles approach), in which they considered an efficient, polynomial time (honest) prover along with a super-efficient verifier, whose complexity is almost linear in the input size and polylogarithmic in the computation of the language [2]. In 2009, Gentry provided a remarkable result in the form of a fully homomorphic encryption scheme [38]; utilizing it along with Yao's garbled circuits [49], Gennaro et al. formalized the notion of a non-interactive VC scheme in 2010 [1]. Some of its subsequent works are based on theoretical concepts [37, 50–52], because the reflected results are hard to materialize, as described in Section 2. Recently, the field appears to have turned a corner, and several research efforts show that VC for large classes of functions, and even for generic outsourced computation, has the prospect of being implemented in practical scenarios [53–61]. These achievements are realized and enabled by continuous improvements in algorithms coupled with the innovation of new supportive tools. Alongside advancements in computational power, parallel processing, and
better storage capabilities, such breakthroughs have also been spurred by the explosion of new potential application territories for secure VC, as described in Section 3.
In this paper, we first analyze the primitives that laid the theoretical base for recent VC developments. Subsequently, we provide details of the VC paradigm along with its applications, followed by the recently implemented models, which have brought this area to the verge of practicality. More precisely, we provide a chronological study and classify the theoretical domains, followed by their utilization in the applied VC paradigm, as depicted in Fig. 2.
The contributions of this survey include:
• a chronological study of theoretical and practical advancements in the VC domain, while describing their
motivations, contributions and overheads;
• a classification of well-known approaches based on
their functionalities, while explaining their general
framework with required conditions;
• a brief summary of different applied VC systems supported by a discussion on the contributions and the required refinements; and finally
• our insights about open challenges for VC, missing work, and possible future fields that would benefit from employing VC.
To the best of our knowledge, our contribution is the first detailed work that provides a comprehensive investigation of the VC domain, while taking into account such a large scope for classifying the theoretical advancements and their refined implementations. More precisely, the interdisciplinary VC paradigm is actually a blend of various domains such as complexity theory, cryptography, languages, and systems [62]. Therefore, brief summaries of the respective domains are also included in this survey. Although the scholastic community has introduced many nuanced theoretical results since the emergence of the VC concept, we only discuss the related contributions that laid the foundation for the VC domain. Moreover, the presented work is useful for both audiences: the one interested in complexity theory and the theoretical advancements, and the other concerned with the VC models and their practical refinements (readers belonging to this latter category may skip Subsection 2.2).
The remaining contents of this survey are structured as
follows. Section 2 describes the approaches that are based
on assumptions, followed by a subsection on proof/arguments based approaches. Further, this section presents the formalized
definition of VC, reviews different approaches, and discusses
applications of VC. Besides, Section 3 describes the details
of implemented VC systems. More precisely, within Sections 2 and 3, we only review different approaches while comparing their advantages and reporting their overheads. Section 4 provides the summary, possible refinements, missing
work and future recommendations, while Section 5 details
the concluding thoughts.
2 Towards verifiable computation
Section 2 is further divided into three subsections. Subsection 2.1 describes the proposals based on assumptions. We review proof/argument based proposals, which laid the foundation for formalized VC, in Subsection 2.2. Subsection 2.3 presents formalized VC and its applications.
Fig. 2 Chronological study of verifiable computation
2.1 Assumption based approaches
Many researchers and academics, along with the IT industry and vendors from the information security community, have proposed assumption based solutions towards VC, as depicted in Fig. 3. Some of them focus on specially designed trusted hardware, while others present audit based solutions for providing security and verifying the correctness of computations. Further proposals suggest making assumptions of either replication or attestation.
Fig. 3 Assumptions based VC
Towards the assumption based VC paradigm, the utilization of secure coprocessors has been suggested by some researchers in order to provide hardware based security alongside software based schemes. Working towards such solutions for VC, Bennet [63] emphasized the security of the hardware systems on which secure distributed schemes are built for secure computational operations. He proposed, implemented, and analyzed a secure hardware module, known as a secure coprocessor, that can be added to PCs and workstations in order to preserve the privacy of data and computational outcomes. The proposed architecture can bootstrap cryptographic techniques, may perform statistical checks against vendor fraud, and can also be operated in a fault tolerant manner. Still, the major issues with this model relate to development, manufacturing, and maintenance. Another physical security based solution was proposed by Smith and Weingart in the form of a secure coprocessor built for utilization at a commercial level [64].
Although the discussed approaches provide implementations in remote environments, their actual limitation is tamper resistance. These hardware based mechanisms are therefore quite costly and hard to implement. To cope with these issues, trusted platform modules (TPMs) were nominated as an international standard for secure crypto-processors [41]. TPMs securely generate cryptographic keys and offer protected storage along with remote attestation. Although TPMs are commercially available at reduced cost, they have some issues related to the privacy of clients and offer only little physical tamper resistance. Moreover, the integrity and confidentiality of the physical memory space are
also not assured by TPMs. Recently, Wen et al. proposed another architecture based security mechanism for untrusted cloud environments [42], which imposes an access control policy over the shared resources in order to protect the virtual machine's memory integrity. In addition, it provides security for data sharing and inter-processor communication. The proposed mechanism is scalable and efficient for data transportation; however, it may incur some authentication overhead and may only provide security against low cost physical attacks. In more general terms, trusted hardware based solutions require high design and maintenance costs, with the trade-off of being limited to specific computations that do not cover the scope of VC as a whole. In addition, untraceable hardware faults can still occur; hence, these solutions are not very efficient for outsourcing general purpose computations. Another line of research suggests an attestation
mechanism for verifying the results of outsourced computation. Towards such solutions, Seshadri et al. proposed a software based attestation system called Pioneer, which employs a challenge-response mechanism for the verification of computation [43]. Although this solution is capable of reporting verification failures at run time, the verifier is required to have knowledge of the prover's hardware configuration. Similarly, a wide range of attestation based solutions is discussed in Ref. [44]; however, any vulnerability in the software of such attestation systems makes these solutions infeasible for VC.
Some assumption based solutions towards VC suggest replicating the outsourced computation by employing multiple servers. Working on replication based solutions, Malkhi and Reiter proposed masking quorum systems that ensure data consistency and availability while tolerating faults up to a threshold value [45]. Nonetheless, as the faults exceed the threshold limits, these systems become insecure, and they sacrifice liveness when very few servers are reachable. In a recent work, Canetti et al. also put forward a replication based solution towards VC that suggests outsourcing the computational task to more than one SP while assuming that at least one among them remains honest [46]. More precisely, collision resistant hash functions allow the SPs to utilize small commitments in order to commit to the large intermediate states. During the verification phase, the verifier checks for inconsistencies between the committed intermediate states provided by the different SPs. Yet, these solutions are only feasible if the replicated faults remain uncorrelated.
Another assumption based solution submitted by researchers recommends auditing the outsourced computation. Monrose et al. proposed a three level (compile, run, and verify) framework towards remote auditability of outsourced
computation [47]. During the compile phase, the computations are augmented with code that generates state points. These components are then sent to the workers of the SP for execution of the task and generation of the proof and the final state. The remote auditor has the responsibility to check the proper execution of the second phase. Finally, these results are sent to the verifier for checking the correctness of the transactions performed during the computation. Although the presented scheme is heuristically secure, its audit (trace) cost grows linearly with the number of executed instructions, and the payment mechanism is not discussed. Working towards payment issues regarding rewards (in case of correct computation) and fines (in case of cheating), Belenkiy et al. proposed audit mechanisms for incentivizing outsourced computation [48]. The authors suggested double checking the computation either by the authority itself or by hiring multiple SPs. Nonetheless, these solutions are impractical for resource constrained clients, as the audits necessitate the clients to recompute a small portion of the computations accomplished by the untrusted SPs. Moreover, these solutions rely on the assumption that some workers must be non-colluding and honest.
Unfortunately, the aforementioned solutions only work for specific computations and necessitate a trusted chain, while assuming that the hardware systems and all the other stakeholders operate correctly in their respective domains. Hence, so far they are not efficient candidates for adoption within the VC domain.
2.2 Proofs/arguments based approaches
This subsection is further divided into interactive proof, zero knowledge (ZK) proof/argument, probabilistic checkable proof (PCP), and computationally sound (CS) proof based solutions towards VC.
2.2.1 Interactive proof based approaches
Interactive proof systems consider a scenario in which the polynomial time verifier (client) communicates with the powerful but untrusted exponential time prover (worker), in order to verify the correctness of the output of the computations returned by the prover. Interactive proof systems have to fulfill two essential requirements, stated as follows:
• Completeness If the statement returned by the prover is correct, the verifier will accept it.
• Soundness If the statement returned by the prover is incorrect, the verifier cannot be convinced by the prover, except with very small probability.
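As a toy illustration of these two requirements, the following Python sketch implements the classic private coin interactive proof for graph non-isomorphism on very small graphs: the verifier secretly permutes one of the two graphs and challenges the prover to identify which graph it came from. If the graphs are indeed non-isomorphic, the unbounded prover always answers correctly (completeness); if they are isomorphic, any prover answers correctly in a round with probability at most 1/2 (soundness). The brute-force prover is exponential by design, and the graphs and helper names are purely illustrative.

import itertools
import random

def permute(adj, perm):
    n = len(adj)
    return [[adj[perm[i]][perm[j]] for j in range(n)] for i in range(n)]

def isomorphic(a, b):
    # unbounded-prover subroutine: brute force over all vertex permutations
    return any(permute(a, p) == b for p in itertools.permutations(range(len(a))))

def prover_answer(g0, g1, h):
    # the prover decides which input graph the challenge h was sampled from
    return 0 if isomorphic(g0, h) else 1

def gni_round(g0, g1):
    b = random.randrange(2)                 # verifier's private coin
    perm = list(range(len(g0)))
    random.shuffle(perm)
    h = permute([g0, g1][b], perm)          # challenge sent to the prover
    return prover_answer(g0, g1, h) == b    # verifier accepts iff the answer matches

# a 4-vertex path (g0) and a triangle plus an isolated vertex (g1): not isomorphic
g0 = [[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]]
g1 = [[0,1,1,0],[1,0,1,0],[1,1,0,0],[0,0,0,0]]
print(all(gni_round(g0, g1) for _ in range(20)))   # True for the honest prover

Repeating the round k times drives a cheating prover's success probability down to 2^(-k), which is the sense in which soundness holds "except with very small probability".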
In 1985, Goldwasser et al. were the first to introduce the interactive proof class IP[f(n)], with f(n) rounds and input size n, for cryptographic primitives [7]. This interactive proof system is probabilistic in nature. The working mechanism of IP[f(n)] is that, in each round, the verifier performs some computation and sends a message to the prover; after performing his computations, the prover returns information to the verifier. Eventually, the verifier decides whether to accept or reject the results. In
the same symposium, Babai proposed the Arthur-Merlin (AM) class, a new combinatorial games based model for interactive proofs [8]. In this model, king Arthur is considered the probabilistic verifier who can make random moves during the game, whereas Merlin serves as a prover with unbounded resources whose game moves can be optimal. Here, Merlin tries to convince Arthur that a specific string x belongs to a language L. The total number of moves and the total length of the printed strings are called the length and the size of the game, respectively. As required by the efficiency measures, the length and size of the game, as well as the referee's running time, must be polynomially bounded in the length of the string x. Throughout this procedure, the role of the referee is to check the input and the moves, and finally to announce the winner of the game. In the same paper, Babai also presented a new class AM[f(n)], similar to IP[f(n)]. The AM[f(n)] class has to fulfill an extra condition: the verifier is bound to provide all of its random bits to the prover, which are subsequently used during the computation process. In such a case the verifier has nothing to hide from the prover, so the powerful prover (knowing the random bits provided by the verifier) can simulate every computation that the verifier performs. For this reason the AM[f(n)] protocol is called public coin, whereas the IP[f(n)] protocol is called private coin. Working on
these lines, Goldwasser and Sipser showed how to remove the verifier's ability to hide the random bits used during the computations [18]: a private coin protocol can be simulated by a public coin protocol that uses only two more rounds to recognize the same language. This result made the public coin protocol approximately equivalent to the private coin protocol. Subsequently, Goldwasser et al. proposed another interactive model (similar to [8]), in which the polynomial time verifier adopts a randomization strategy and the prover works in an adaptive manner [7]. Notably, this model offers soundness unconditionally. Furthermore, in the same paper, a new notion of knowledge complexity was also introduced by the authors. Knowledge complexity measures the computational gain achieved through the interactive process.
A new concept of multi provers for interactive proofs was
proposed by Ben-Or et al. for the first time in 1988 [19].
The authors presented a system in which two untrusted and computationally unbounded provers jointly agree to convince the verifier about the correctness of the statements. The only restriction is that, once communication starts during the rounds, the provers cannot collude with each other. Three years later, Babai et al. presented the remarkable proof that MIP = NEXPTIME [20]; that is, every language decidable by a nondeterministic machine in exponential time admits a multi-prover interactive proof.
At that time it was the general perception that interactive proofs are only a minor extension of the NP class, but the proof of IP = PSPACE provided by Shamir brought a revolutionary conceptual change to the research community [15]. Informally, the author used interaction and randomization to provide an astonishing result: statements provable with polynomial space can be verified in polynomial time. Subsequently, Lund et al. utilized algebraic techniques for constructing interactive proof systems [16]. Their proposed scheme is used to prove that every language in the polynomial hierarchy has an interactive proof. Further, Fortnow et al. showed that a verifier running in polynomial time and space within a public coin interactive protocol accepts the same language sets that are accepted by a deterministic machine utilizing polynomial space [17]. This consequence is conceptually equivalent to IP = PSPACE. However, the works of [15–17] share a common drawback: even an honest prover has to carry out the computations in non-polynomial time.
All of the previous works studied the expressiveness of interactive proofs under various restrictions such as rounds, verification time, randomness, depth, or space. However, the complexity of the prover was not taken into account as deliberately. Similarly, most of the preceding cryptographic protocols involving interactive proofs bound all of the participants to run in polynomial time. In these settings, provers use auxiliary secret information for the proving tasks instead of generating the proof.
Goldwasser et al. revisited interactive protocols for tractable languages and proposed a public coin interactive approach (the muggles approach), in which they considered an efficient, polynomial time (honest) prover along with a super-efficient verifier, whose complexity is almost linear in the input size and polylogarithmic in the computation of the language [2]. The approach can be applied to languages computable by uniform log-space Boolean circuits. Given such a circuit, the protocol allows the prover to run in time polynomial in the circuit size to evaluate the circuit correctly, and the verifier to run in logarithmic time; further, the number of interactive rounds is polylogarithmic. The protocol provides soundness unconditionally, but the computational time of prover and verifier is still high. Although this approach is efficient for quasi-linear functions that can be computed by arithmetic circuits of small depth, it is not efficiently applicable to fine-grained scenarios, as it requires regular wiring patterns or regular circuits [55, 65]. Furthermore, for a restricted class of functions,
they also proposed a non-interactive argument. In a subsequent work, Rothblum et al. investigated interactive proofs with sub-linear verifiers, introducing the notion of interactive proofs of proximity [66]. More precisely, the sub-linear time verifier (with query access to the input) is allowed to verify that the computed results are approximately correct. Moreover, there is no need for a pre-processing setup, and the authors take advantage of the fact that most of the input bits are never read by the sub-linear verifier. In particular, the underlying work is essentially a parallel repetition of Ref. [2]. Although this proposal reduces the verification cost, the communication increases significantly due to the repetitions.
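At the core of the muggles protocol of Goldwasser et al. [2], and of several proof based VC systems built on it, lies the sum-check protocol: the prover convinces the verifier of the value of the sum of a low-degree polynomial over the Boolean hypercube, one variable per round, and the verifier ends with a single evaluation of the polynomial at a random point. The sketch below runs an honest prover and verifier for a small multilinear polynomial over a toy prime field; the polynomial, the modulus, and the function names are chosen for illustration only and are not taken from [2].

import random

P = 2_000_003        # toy prime modulus for the field
N_VARS = 3

def g(x):
    # example multilinear polynomial g(x1, x2, x3) = 2*x1*x2 + x2*x3 + x1 (mod P)
    x1, x2, x3 = x
    return (2 * x1 * x2 + x2 * x3 + x1) % P

def hypercube_sum(prefix):
    # honest-prover helper: sum g over all Boolean settings of the remaining variables
    free = N_VARS - len(prefix)
    total = 0
    for mask in range(2 ** free):
        tail = [(mask >> i) & 1 for i in range(free)]
        total = (total + g(prefix + tail)) % P
    return total

def sumcheck():
    claim = hypercube_sum([])              # the prover's claimed total sum
    rs = []
    for _ in range(N_VARS):
        # round polynomial sent by the prover, given by its values at 0 and 1
        v0 = hypercube_sum(rs + [0])
        v1 = hypercube_sum(rs + [1])
        if (v0 + v1) % P != claim:         # verifier's consistency check
            return False
        r = random.randrange(P)            # verifier's random challenge
        claim = (v0 * (1 - r) + v1 * r) % P  # evaluate the degree-1 round polynomial at r
        rs.append(r)
    # final check: a single evaluation of g at the random point
    return g(rs) % P == claim

print(sumcheck())   # True for the honest prover

The verifier's work amounts to a constant amount of field arithmetic per round plus one evaluation of g, which is what makes this primitive attractive for outsourced computation.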
Beyond the concepts of complexity and cryptographic theories, the recently emerging field of VC promises to incorporate interactive proofs. For instance, a resource constrained user outsources a heavy computational load to some untrusted SP with the utmost requirement of getting back the results of a correct computation. The SP performs the required computation and sends the results back to the user along with a valid proof of correct computation. Now, the user needs to check the validity of the proof in much less time than performing the computation by itself. Further, the SP is required to complete this endeavor in limited time. In this scenario, interactive proofs serve as a solution: the user is considered the verifier, while the SP serves as the prover, and the prover tries to convince the verifier, within polynomial time, that the computations were performed correctly. Working along these lines, to outsource bounded depth computations with efficient verifiability, Goldwasser et al. put forward a single-round interactive argument [67]. The soundness of the underlying argument system is based on the existence of a private information retrieval scheme with poly-logarithmic communication cost.
Another work, presented by Blumberg et al., utilized the model of [55] as a base for constructing an applied system of multi-prover interactive proofs [68]. In addition, the authors presented a built system to evaluate the performance metrics of VC, named Clover, which produces provers and a verifier when given code written in C. Formally, this system applies to schemes that are based on irregular circuits, and it is not yet efficient enough to work with general circuits. Besides, it requires preprocessing; hence, it is currently suitable only for batches of computations, so that amortization can reduce the verifier's setup cost. Further, it does not yet support the reduction of higher level programs into circuits while preserving the complexity.
The major issue with interactive proofs is the huge difference between the computational powers of the prover (super-polynomial) and the weak verifier (polynomial). Therefore, the potent prover can convince the resource constrained verifier about the correctness of statements that could not otherwise be computed by the verifier himself. Recently, however, modified variants of interactive proofs (discussed above) have been widely used as basic tools by researchers for the construction of theoretical and proof based implemented VC systems.
• Note Despite the many nuanced results presented within the technical domain of interactive proofs, we discussed only the ones that provided the base for the construction of VC protocols. For a more detailed description of interactive proofs, interested readers are referred to [67, 69].
2.2.2 Zero knowledge based approaches
Zero knowledge (ZK) proof systems consider a scenario in which the prover convinces the verifier that a statement is true without divulging any related information. A ZK protocol consists of three passes during communication. The first pass is a message, termed the witness (commitment), that is sent from the prover to the verifier; the second message is a reply, named the challenge, from the verifier to the prover; and the final one, known as the response, is sent from the prover to the verifier. The verifier then checks validity by comparing the witness and response values; if the response matches the expected value, the verifier accepts. In addition, if the prover and verifier follow the ZK protocol properly, they are called honest; anyone among them who does not follow the protocol properly is known as a cheating prover or verifier. ZK protocols are probabilistic in nature and have to fulfill three essential properties, described as follows.
• Completeness In case of a true statement, the honest verifier will be convinced by the honest prover.
• Soundness In case of a false statement, the honest verifier will not be convinced by the cheating prover, except with small probability.
• Zero knowledge In case of a true statement, the cheating verifier cannot learn anything apart from the validity of the statement.
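The commit-challenge-response structure described above is conveniently illustrated by Schnorr's identification protocol, a classic (honest-verifier) zero knowledge protocol for proving knowledge of a discrete logarithm. The sketch below uses tiny textbook parameters purely for illustration; a real deployment would use a standardized, large prime-order group.

import secrets

# toy parameters: p = 2q + 1 with p, q prime, and g of order q modulo p (illustrative only)
p, q, g = 10007, 5003, 4

x = secrets.randbelow(q)        # prover's secret (the witness)
y = pow(g, x, p)                # public statement: "I know x such that g^x = y (mod p)"

# 1) witness/commitment: the prover picks a random r and sends t = g^r
r = secrets.randbelow(q)
t = pow(g, r, p)

# 2) challenge: the verifier sends a random c
c = secrets.randbelow(q)

# 3) response: the prover sends s = r + c*x (mod q)
s = (r + c * x) % q

# verification: accept iff g^s = t * y^c (mod p)
assert pow(g, s, p) == (t * pow(y, c, p)) % p

A transcript with the same distribution can be produced without knowing x by choosing s and c first and setting t = g^s * y^(-c) mod p, which is exactly the simulator argument discussed in the next paragraph.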
The zero knowledge property is formalized by assuming that a cheating verifier possesses a simulator that has no access to the prover. Providing only the statement (to be proved) to such a simulator yields a simulated transcript (distribution) that looks like the interaction between the cheating verifier and the prover. The zero knowledge property makes the ZK concept distinct from that of interactive proofs.
Many variants of ZK have been put forward by the research community, among which (perfect, statistical, and computational) ZK proofs have earned supreme importance [7, 23]. Perfect zero knowledge (PZK) proofs are those in which the distribution produced by the actual proof protocol and that produced by the simulator are exactly the same. More generally, PZK proofs leak no information at all. In statistical zero knowledge (SZK) proofs, the actual and simulated distributions are statistically close (instead of being exactly the same), and these protocols leak only negligible information. Computational zero knowledge (CZK) proofs refer to a weaker scenario in which the actual and simulated distributions are indistinguishable only in polynomial time; in this case, the amount of information leaked to the polynomial time verifier is negligible. The former two notions are comparatively stronger than the latter, as they do not bound the computational power of the verifier, and they leak either zero or only negligible information. Moreover, SZK proofs provide information theoretic security and are built on stronger intractability assumptions as compared to CZK proofs. Furthermore, SZK proofs should preferably be utilized over SZK arguments and CZK proofs [70].
Goldwasser et al. considered the idea of ZK proofs for the very first time in 1985, and also provided a ZK proof for the quadratic residuosity problem [7]. Although this was a remarkable contribution, their definition suffers from some limitations. Following their definition, it cannot, in general, be proved that the composition of ZK proofs also yields a ZK proof; however, restricting the prover up to some bounds, their definition is preserved under a constant number of sequential compositions. Furthermore, employing their proposed model, a dishonest verifier can compute his message before the beginning of the proof protocol by utilizing knowledge acquired from a previous phase in which the ZK proof could be employed as a sub-protocol. Later, the shortcomings of the ZK approach presented in [7] were investigated by Goldreich and Oren [27]. To overcome these issues, the authors proposed two concepts: auxiliary input ZK and black-box simulation ZK. The motivation behind the auxiliary input ZK approach is that a cheating verifier holding auxiliary information cannot extract, while communicating with the prover, any additional information that it cannot compute by itself. Furthermore, the original notion is ZK only for a single execution and can fail to be ZK for two sequential executions, whereas auxiliary input ZK is closed under sequential composition. In black-box simulation ZK, the simulation of the interaction between the prover and a polynomial time verifier (V∗) is produced by a polynomial time simulator (M) that uses V∗ as a black box.
Proceeding along similar lines, Goldreich et al. demonstrated the applicability and generality of ZK proofs [21]. They showed that, assuming the existence of any commitment scheme (based either on encryption or on hiding information through some physical means), ZK proofs can be constructed for all problems in NP. However, this approach is not efficient, as it invokes constant round protocols a variable (non-constant) number of times. Besides, without making any assumption, they also proved that graph non-isomorphism has a ZK interactive proof. Fortnow studied the computational complexity of PZK proofs, and showed that the existence of a PZK proof system for a language implies a bounded-round interactive proof for the complement of that language [23]. Aiello and Hastad further proved that if any language has a ZK proof, it also has a two-round interactive proof [24]. Moreover, every NP set has an SZK argument under standard intractability assumptions, but not all NP sets have SZK proofs [23, 24].
The application perspective of ZK proofs was first visited in 1988 by Feige and Shamir while providing identification schemes [25]. In their schemes, the proving parties are identified by proving possession of their knowledge instead of validating assertions. Furthermore, based on the ZK proof for quadratic residuosity [7], Fiat and Shamir presented signature and identification schemes without utilizing public or private keys [26]. Ben-Or et al. assumed a secure (unbreakable) encryption scheme, and proved that any language that admits an interactive proof also admits a ZK proof [22]. In the same paper, the authors assumed envelopes for a bit commitment scheme, and provided another implication for PZK interactive proofs.
An imperative efficiency measure of the complexity of a protocol is the total number of rounds. More precisely, a constant number of rounds is desirable for a protocol to be more efficient. It is remarked that, under standard intractability assumptions, constant round ZK proofs exist for all language sets in NP. Working in this research direction, Feige and Shamir assumed the existence of one way functions to establish constant round ZK interactive arguments for all NP languages [28]. Although the intractability assumption they adopted was minimal, this theoretical concept was hard to implement for practical purposes in its presented form. In addition, their protocol provided soundness only against a computationally bounded prover. This shortcoming was noticed and improved by Goldreich and Kahan, who provided unbounded soundness [29]. However, the authors achieved this result by utilizing harder assumptions known as claw free functions (based on the discrete logarithm problem or the Blum integer factoring problem) to establish the existence of constant round ZK proofs of knowledge.
Another line of research in the field of ZK proofs is the non-interactive zero knowledge (NIZK) proof system, which was proposed by Blum et al. for the first time in 1988 [71]. The NIZK proof system is based on a common reference string (CRS) that is generated by a trusted party and is accessible to the prover as well as the verifier. The prover can only send one message to the verifier, which is accepted or rejected as the final decision. The authors also provided a bounded NIZK proof system for 3SAT in the same paper. Lapidot and Shamir assumed the existence of one way permutations, and in 1991 constructed the first publicly verifiable NIZK proof for any language in NP [72]. In their system, both prover and verifier share a common CRS that is mapped onto a sequence of matrices containing a Hamiltonian matrix with very high probability. The prover sends the permutation and its original entries to convince the verifier of Hamiltonicity, and the verifier accepts the proof if he finds the revealed entries equal to zero.
The concepts of zero-knowledge sets, and the resulting zero-knowledge (elementary) databases, were first formalized by Micali et al. in 2003 [31]. The intuition behind zero-knowledge sets is to enable the polynomial time prover to choose a finite set of finite strings and commit to it by just posting an easily computable short message. Afterwards, given an arbitrary sequence of strings relative to the same commitment, the prover can prove, in a non-interactive manner and without revealing any undue information about the set, whether any of the provided strings belongs to that finite set or not. At this stage, the prover actually posts an additional, easily computable and short proof, the correctness of which can be simply and publicly verified by utilizing the public random string. Zero-knowledge sets maintain the privacy of the prover side only and offer no privacy to the client; also, because they are static, these sets cannot easily be updated. In the zero-knowledge (elementary) database approach, the prover commits to an elementary database and, given a sequence of binary strings (a key), proves whether the specific string exists within the database or not; if yes, it also determines the corresponding value. Throughout this process, the zero knowledge property is maintained by the prover, i.e., it does not reveal any undue knowledge.
Proceeding along the non-interactive line of research, Groth et al. in 2006 provided the first perfect NIZK system for any language in NP [30]. They modified only the CRS to convert an encryption scheme into a perfectly hiding commitment scheme, which transformed their constructed NIZK system into a perfect NIZK argument system for SAT. Although this is a remarkable argument system that accepts all languages in NP, the implication is that their construction offers adaptive soundness only for circuits of limited size. In the same paper, the authors also built a NIZK protocol that is UC-secure even in the presence of adaptive adversaries. Further, relying on pairing based groups, Groth recommended another sub-linear sized NIZK argument system with efficient public verifiability [73]. The author assumed the q-CPDH and q-PKE assumptions to provide a perfect (w.r.t. completeness and zero knowledge) and computationally sound argument system. In fact, he suggested a way to compress the satisfiability proof of a Boolean circuit by arithmetizing it while using only a constant number of group elements. The circuit is chosen adaptively; hence, in his model, the prover's computational cost as well as the CRS grow quadratically with respect to the circuit size, while the verifier's computational cost is linear and can be further reduced to constant in an amortized manner. Besides, the author introduced a new concept known as the product argument (a proof that commitments to x, y, and z satisfy z_i = x_i · y_i), along with proofs of perfect witness indistinguishability and perfect completeness. One can utilize Groth's powerful approach, combined with an appropriate product argument, to construct efficient NIZK arguments for many other languages. Furthermore, in the same paper, a restriction argument is also put forward by Groth, in which the prover's goal is to prove to the verifier that the committed vector has certain entries equal to zero.
The starting point of the road map towards succinct non-interactive arguments of knowledge (SNARKs) is the work of Kilian [12], in which he utilized PCPs for constructing succinct interactive arguments for NP. These arguments are proof systems with computational soundness, and are considered succinct due to their polylogarithmic communication complexity. To make these arguments non-interactive, Micali employed the random oracle model, in which the prover applies a hash function to its PCP string in order to generate the verifier's PCP queries. In order to remove the random oracle from Micali's construction, Bitansky et al. introduced the notion of an extractable collision-resistant hash function (ECRH), which is a usual collision-resistant hash function with an additional extraction property. ECRH security rests on a non-falsifiable but plausible assumption: an extractor that is allowed to observe an algorithm computing an image of the ECRH can compute a pre-image. This ECRH based construction is known as a succinct non-interactive argument (SNARG) and, when it further satisfies the proof of knowledge property, as a succinct non-interactive argument of knowledge (SNARK). Basically, SNARGs allow the verification
of NP statements with much reduced complexity as compared to classical NP verification procedures. Working towards SNARKs and their application to distributed computing for providing correctness of dynamic computations by utilizing proof-carrying data (PCD), Bitansky et al. presented an efficient way to transform SNARKs with preprocessing into SNARKs without preprocessing, and also to transform SNARKs into a PCD system [74]. Further, many applications of SNARKs and PCD systems necessitate ensuring input or function privacy, which can be achieved by embedding ZK into these primitives. Working towards this line, the authors proposed a ZK-SNARK construction that is based on ECRH and the knowledge of exponent assumption, and the SNARK transformation preserves the ZK property. Basically, in SNARKs the ZK property ensures that the honest prover can generate plausible proofs for true theorems without divulging any information related to the witness. In addition, they apply their transformation to Groth's arguments [73] to present preprocessing free, publicly verifiable SNARKs and PCD in the plain model. Recently, following the approach of [74], Ben-Sasson et al. provided an implementation of a ZK-SNARK that runs a random access machine for producing the verification proof of a correct computation [75].
The authors proposed the utilization of suitable elliptic curves for bootstrapping, in such a way that the construction takes a collision resistant hash function and a preprocessed ZK-SNARK as input and outputs a scalable ZK-SNARK. Although the verifier's setup cost is independent of the computation length, it is still proportional to the size of the constraint system being checked. The prover's memory requirement is also independent of the computation length, but its computational cost is several times higher than that of other implementations, such as Pantry and Buffet. Besides, the authors also implemented a single predicate PCD system for arithmetic circuits, but the predicate follows a rigid structure (i.e., the same input/output length and a fixed number of accepted messages). Chiesa et al. extended the work of Ref. [75] to present a multi predicate PCD system, in which each predicate is assigned to a distinct node [76]. Although this proposal relaxes the rigidness, the multiple predicates make the arithmetic circuit more complex for the composition of the recursive proof.
In response to the questions of whether there exist alternative mechanisms to construct better SNARKs without explicitly utilizing PCPs, or whether there is some characterization of NP that would support cryptographic applications in a better way, Gennaro et al. introduced a new characterization of the complexity class NP known as quadratic span programs (QSPs), an extended form of the (linear) span programs presented by Karchmer and Wigderson [77]. Informally, a span program accepts an input x if and only if the target vector can be spanned by a linear combination of the vectors belonging to x. As an extension, a QSP accepts an input x if and only if the target polynomial, or a multiple of it, can be expressed as the product of two linear combinations of the polynomials belonging to x. Unlike span programs, QSPs can efficiently compute any efficiently computable function. Thus, the authors utilized QSPs to construct a NIZK Circuit-SAT argument in the CRS model. In their argument, the Circuit-SAT proof consists of only seven group elements. In addition, the prover computation and the CRS size are linear, and the verifier computation is linear w.r.t. the input size. Although their argument system is also efficient in terms of short proof size, it offers non-adaptive soundness and provides only a weak wire checking mechanism for verifying consistency. Working on the
same lines of research, Lipmaa utilized a combination of span programs and Reed-Solomon error-correcting codes to provide a NIZK argument with an improved wire checker and constant verifier computation time [78]. Besides, the author utilized a universal circuit for providing a proof of adaptive soundness. Fauzi et al. introduced a shift argument (a proof that commitments to x and y satisfy that x is a coordinate shift of y) and provided an improved non-interactive product argument [79]. Based on these arguments, the authors put forward several NIZK arguments, e.g., for Decision-Knapsack, Subset-Sum, Set-Partition, and NP-complete languages, with linear verifier computation, constant communication, and sub-quadratic prover computation. Although their arguments are quite efficient as compared to the previous ones, the prover computation is high and needs to be addressed. In his recent paper [80], Lipmaa further improves the product argument by taking advantage of the QAP based blueprint provided in [77], instead of using progression-free sets (a hard problem in additive combinatorics). Employing this improved version of the product argument, the author also provides an adaptive NIZK argument with lower computational complexity. Moreover, since faster product arguments produce more efficient arguments, many of the aforementioned NIZK arguments can be improved by using Lipmaa's improved product argument.
The major issue with ZK proofs is the extra-powerful prover: the powerful prover can convince the weak verifier about the correctness of statements that could not otherwise be computed by the verifier himself. In addition, ZK variants still face high verifier and prover complexity, and they are hard to build for the concurrent execution of multiple protocols. Nonetheless, due to the trade-off of better security, modified variants of ZK proofs (discussed above) are widely used as basic tools by researchers for the construction of theoretical and proof based implemented VC systems.
• Note Despite the many nuanced results presented within the technical domain of ZK proofs/arguments, we discussed only the ones that provided the base for the construction of the VC protocols. For a more detailed description of ZK proofs/arguments, interested readers are referred to [69, 70].
2.2.3 Probabilistic checkable proof based approaches
Probabilistic checkable proof (PCP) systems consider a scenario in which a polynomial time randomized algorithm (the verifier) has a restriction on its randomness (r(n)) and is bounded to read a limited number of bits (at most q(n)) of the proof. Thus, the verifier, provided with an input x and a membership proof σ, has to accept a correct proof and reject incorrect proofs with very high probability. Subject to the aforementioned restrictions, PCP systems have to fulfill two essential requirements, stated as follows:
• Completeness In case of a correct proof, the verifier accepts for every random string, i.e., with probability 1.
• Soundness In case of an incorrect proof, the verifier rejects with probability greater than 1/2 over its random strings.
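A flavour of how a verifier can gain confidence while reading only a few bits of a very long proof is given by the Blum-Luby-Rubinfeld (BLR) linearity test, one building block of the Hadamard code based PCPs mentioned later in this subsection: each trial reads just three positions of the proof oracle and checks a single linear relation. The sketch below, in which the oracle is honestly linear, is illustrative only and is not a full PCP verifier.

import random

N = 16   # length of the input vectors over GF(2)

def xor(a, b):
    return [u ^ v for u, v in zip(a, b)]

def linear_oracle(coeffs):
    # an honest proof oracle: a genuinely linear function f(x) = <coeffs, x> over GF(2)
    def f(x):
        return sum(c & xi for c, xi in zip(coeffs, x)) & 1
    return f

def blr_test(f, trials=50):
    # accept iff f(x) + f(y) = f(x + y) on every sampled triple (3 queries per trial)
    for _ in range(trials):
        x = [random.randrange(2) for _ in range(N)]
        y = [random.randrange(2) for _ in range(N)]
        if f(x) ^ f(y) != f(xor(x, y)):
            return False
    return True

coeffs = [random.randrange(2) for _ in range(N)]
print(blr_test(linear_oracle(coeffs)))   # True: a linear oracle always passes

An oracle that is far from every linear function is rejected in each trial with probability related to its distance from linearity, so a modest number of trials already exposes it with high probability.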
The issue with interactive proofs is the huge difference between the computational powers of the prover (super-polynomial) and the weak verifier (polynomial). Thus, making the verifier as efficient as possible was the dire need of that time. To address this very issue, the research community introduced the notion of PCPs. In an actual sense, the equivalence MIP = NEXPTIME laid the foundation for future work on PCP theory [20]. Although the significance of the PCP machinery was described earlier [32], Arora and Safra explicitly defined PCPs for the first time in 1998 [10]. In Ref. [9], Babai et al. proposed to bind the prover to write the proof using an error correcting code; in that case, the verifier can check all NP languages in poly-logarithmic time. These proofs are called transparent proofs (holographic proofs). Although in a transparent proof it is easy to verify the proof, the prover needs
to work hard to construct the proof. Feige et al. proposed a less demanding proof system in which the assumption that the input is in error correcting format is eliminated. Although in their system the complexity of the verifier is still polynomial time, the complexities of randomness and queries are reduced to poly-logarithmic. Taking advantage of PCP theory, many results were restated: the finding of Ref. [20] can be stated as NEXP = PCP(poly, poly); the result of Ref. [9] is restated as NP ⊆ PCP(log, polylog); and that of Ref. [32] can be restated as NP ⊆ PCP(f(n), f(n)), where f(n) = log n · log log n. Furthermore, Arora et al. proved the PCP theorem (NP = PCP(O(log n), O(1))) and presented an important equivalence NP = PCP(log, f(n)), where f(n) = o(log n) [11]. The complexity classes discussed in our survey are presented in Fig. 4.
Fig. 4 Complexity classes
As the proof in the PCP machinery may be too long for the verifier to process, Kilian, in order to deal with this complication, investigated a new ZK argument that uses transparent proofs as its basis for providing soundness [12]. This work provides a mechanism to transform a PCP into a zero knowledge PCP, and then into a zero knowledge argument. In his argument system, the communication complexity is polylogarithmic, and the prover utilizes a binary (Merkle) tree hashing technique to give the verifier virtual access to the PCP proof. In another work, utilizing the efficient characteristics of different PCP systems, Kilian proposed a hybrid argument construction [13] that employs zero knowledge proofs on committed bits, Merkle tree hashing, and transparent proofs. Besides, recursive steps are added to the transparent proofs to reduce the communication overhead to linear in a security parameter. Although this protocol reduces the number of random bits used for the proofs, it still uses more random bits than many other efficient protocols.
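The Merkle (binary hash) tree commitment that Kilian's argument relies on can be sketched as follows; this is a minimal illustration assuming SHA-256 as the hash function and a power-of-two number of leaves, not the full argument system:

import hashlib

def H(*parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def build_tree(leaves):
    # bottom-up construction; tree[-1][0] is the root, i.e., the short commitment
    level = [H(leaf) for leaf in leaves]
    tree = [level]
    while len(level) > 1:
        level = [H(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree

def open_leaf(tree, index):
    # authentication path for one queried position of the committed proof
    path = []
    for level in tree[:-1]:
        path.append(level[index ^ 1])
        index //= 2
    return path

def verify_leaf(root, leaf, index, path):
    node = H(leaf)
    for sibling in path:
        node = H(node, sibling) if index % 2 == 0 else H(sibling, node)
        index //= 2
    return node == root

The prover publishes only the root; for each of the verifier's queries it opens one leaf with a logarithmic-size authentication path, which is what gives the verifier "virtual access" to the long PCP string.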
All the efficient proposals used thus far follow two stages: 1) the classical proof is converted into an encoded proof (a polynomial sized PCP string); 2) the proof is committed with a tree based hashing technique, and the verifier then chooses a small set of bits. In these arguments the expansion of the classical proof introduces a notable amount of redundancy, which later shrinks sharply; a shortcut merging these two steps into one was therefore needed. Addressing this issue, Ishai et al. proposed an alternative way of compiling long PCPs by using a linear oracle function instead of a polynomial sized PCP string [81]. The domain of the function may be exponentially large, but the overall evaluation time is polynomial. The authors provided a compiler that converts the PCP machinery into an argument system. The communication cost from verifier to prover is polynomial, while the prover sends back only a constant number of encrypted field elements. However, the inefficient Hadamard PCP drives the computational complexity of the prover, and of the verifier during pre-processing, up to quadratic in the size of the proof. Furthermore, the authors also employed this cryptographic-commitment based PCP construction for verifying outsourced computations.
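The shape of such a linear "proof oracle" can be sketched as follows: the proof is the linear function q -> <w, q> over a prime field (with a second oracle for the tensor w ⊗ w), and the verifier's Hadamard-style consistency test multiplies two linear answers and compares them with one quadratic answer. This is only a plaintext illustration; in Ref. [81] the queries are hidden inside a homomorphic commitment. All names and the modulus below are assumptions.

import random

P = 2**61 - 1                      # assumed prime field modulus

def linear_oracle(w):
    # the proof: q -> <w, q> mod P
    return lambda q: sum(wi * qi for wi, qi in zip(w, q)) % P

def quadratic_oracle(w):
    # second oracle: Q -> <w tensor w, Q> mod P, indexed by pairs (i, j)
    return lambda Q: sum(w[i] * w[j] * Q[(i, j)] for (i, j) in Q) % P

def tensor_consistency_test(pi1, pi2, n):
    # pi1(q1) * pi1(q2) must equal pi2(q1 tensor q2) for an honest proof
    q1 = [random.randrange(P) for _ in range(n)]
    q2 = [random.randrange(P) for _ in range(n)]
    Q = {(i, j): (q1[i] * q2[j]) % P for i in range(n) for j in range(n)}
    return (pi1(q1) * pi1(q2)) % P == pi2(Q)

# w = [3, 1, 4]
# assert tensor_consistency_test(linear_oracle(w), quadratic_oracle(w), len(w))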
Kalai and Raz further investigated PCP theory and proposed a notion termed interactive PCP [33]. Combined with an interactive proof, the model offers efficient verification in which only a single bit of the proof string needs to be read. In this model, the authors suggested adding an interactive verification step to previously known PCPs in order to obtain short PCPs. They also showed that, under certain properties, interactive PCPs can be converted into probabilistically checkable arguments [34], ZK interactive proofs, and commit-reveal schemes. In addition, as an application, they provide interactive PCPs for various arithmetic formulas. Proceeding in an alternative manner, based on ECRHs, the scheme of Ref. [82] was modified to provide soundness; the authors took advantage of three well known approaches, namely PCPs, private information retrieval (PIR) [35], and Merkle trees. The overheads of this approach are the nonstandard assumption (knowledge of exponent) and, as in many other schemes, a verification process that is dependent
on the circuit's depth. Another work, related to outsourcing the verification of computations to honest-but-curious verifiers, is presented by Xu et al. [83]. Their work
is comprised of three phases. In the first phase, an additive homomorphic cryptosystem and an arithmetic circuit are utilized to obtain the results of the outsourced computation. In the
second phase, the commitments are constructed by the client
and decommitment procedure is completed by verifier nodes.
In the final phase, verifier performs required operations and
asks queries from prover to complete the verification process
in a linear PCP manner. After verification, verifier instructs
the client to accept or reject the results of outsourced computation provided by prover. Setty et al. reviewed the previous
PCP machinery in search of an approach based on which a
practical implementation could be proposed [84]. They found
that the argument approach of Ref. [81] has the appropriate
capabilities that could lead towards practicality. However, the authors proposed only a high-level sketch, without presenting a concrete protocol, an implementation, an evaluation, or proofs.
The major issue with PCPs is their very high setup cost; they still entail high verifier and prover complexity and are hard to build. However, because of the better guarantees they trade for, the modified PCP variants discussed above are widely used as basic tools by researchers for constructing proof based implemented VC systems.
• Note Although many nuanced results have been presented within the technical domain of PCPs, we discussed only those that provided the basis for the construction of VC protocols. For a more detailed description of PCPs, interested readers are referred to Refs. [69, 81].
2.2.4 Computationally sound proof based approaches
Proceeding with the ZK proofs and PCPs, Micali brought
forward a new notion termed as computationally sound (CS)
proofs [14]. Efficient verifiability (i.e., the verification complexity should be less than that of deciding the statement), efficient provability (i.e., the prover's complexity should be close to that of deciding), and recursive universality (i.e., membership in any semi-recursive language should be efficiently provable) are the three major goals that CS proofs aim to achieve. In fact, in
CS proof systems it is easy to compute the proof for a true
statement while hard or almost impossible to compute for
false one. This is unlike traditional proof systems, in which all deterministically or probabilistically provable statements are required to be true and proofs of false statements are not allowed. In order to eliminate the interaction overhead between prover and verifier, the random oracle model or a feasible cryptographic assumption is adopted in CS proof systems. Therefore, CS proofs are non-interactive and publicly verifiable in nature. The prover issues a CS certificate that can be regarded as a compressed transcript of a longer deciding computation. Under the cryptographic assumption, any NP-complete computation can be checked in polynomial time. In a CS proof system, a high price in terms of complexity has to be paid when the random oracle is removed, which is a major overhead to be dealt with. Working along the lines of Kilian and Micali, a new notion named universal arguments was put forward by Barak
and Goldreich [39]. These arguments are based on a standard hardness assumption that works for polynomial sized circuits. They are public-coin in nature and require only a constant number of rounds. In addition, the authors showed that the existence of standard collision resistant hash functions implies the existence of universal arguments and of (non-black-box) zero-knowledge arguments.
Inspired by Kilian's arguments and CS proofs, Goldwasser et al. introduced a designated verifier CS proof system [40]. In this system, only a designated verifier, who publishes a public-private key pair, can verify. In this sense, the system is not publicly verifiable and
hence relatively weaker as compared to that of CS proofs proposed by Micali. Furthermore, based on the existence of fully
homomorphic encryption (FHE) [38] and Decisional DiffieHellman (DDH) hardness assumptions, the authors proposed
two delegation schemes utilizing their designated CS proof
systems [40]. The first scheme is a non-interactive one consisting of a single offline stage, and can compute any polynomial time function; however, it has instance based complexity and also faces the verifier rejection problem. The second scheme is an interactive one without any offline stage, having only an online stage; it deals with the verifier rejection problem and provides soundness by utilizing a verification oracle. Another work
based on Micali’s CS proofs is put forward by Chung et al.
that presented the construction of schemes that allow the delegation of any function in the class P [37]. Nonetheless, these
schemes are either interactive or non-interactive in the random oracle model.
The major issues with CS proofs are their very high verifier and prover complexity and the difficulty of implementing them. However, because of the efficient verifiability, efficient provability, and recursive universality they trade for, the modified CS proof variants discussed above are widely used as basic tools by researchers for constructing proof based implemented VC systems.
• Note Although many nuanced results have been presented within the technical domain of CS proofs, we discussed only those that provided the basis for the construction of VC protocols. For a more detailed description of CS proofs, interested readers are referred to Refs. [14, 40].
2.3 Formalized verifiable computation
Although Kilian [13] and Micali [14] submitted their works
towards non-interactive verifiable computation implicitly, all
of the previous constructions as well as these were either
based on proofs or arguments. Moving slightly to a different domain, the notion of non-interactive verifiable computation was explicitly formalized by Gennaro et al. for the first
time in 2010 [1]. VC enables businesses and computationally weak clients to outsource their expensive computational tasks, such as the evaluation of a function f on data x1, x2, ..., xn provided dynamically, to powerful but untrusted SPs. After computing, the workers of the SPs return the result yi = f(xi), together with a verifying proof that the computation was performed correctly on the provided values xi. The verification process must require less computational work than evaluating f(xi) from scratch. For instance, if t is the total time it takes to outsource the function f (having inputs and outputs of length n) and to verify its results, then the client should run in time polynomial in n and polylogarithmic in t, while the workers (SPs) should run in time polynomial in t [85].
More specifically, a VC scheme comprises three stages [1]:
• Preprocessing stage The client undergoes this stage once to calculate auxiliary information related to the function f. The public portion is provided to the workers, and the private information is kept secret by the client.
• Input preparation stage During this stage, public and private information related to the input xi of the function f is computed by the client. The public portion is provided to the workers, and the private information is kept secret by the client.
• Output computation and verification stage The workers use the publicly available information and compute an encoded string σy that comprises the value y = f(x), and return the computational result to the client. Utilizing the private information related to the function f and its input xi, the client decodes the result σy returned by the worker. After decoding, the client verifies the result of the outsourced computation.
2.3.1 Problem definition (VC)
A VC scheme comprises four algorithms, i.e., VC = (KeyGen, ProbGen, Compute, Verify), described as follows [1]:
• (pk, sk)←KeyGen(f, 1λ) The randomized key generation algorithm takes the function f (to be outsourced) and a security parameter λ as input; it generates a public key pk that encodes f and is sent to the worker for computing f. It also generates the corresponding private key sk that is kept secret by the client.
• (σx, τx)←ProbGen_sk(x) The randomized problem generation algorithm outputs a public value σx by encoding the input x using the private key sk. The public value σx is provided to the worker to compute with, and the client keeps the corresponding private value τx secret.
• σy←Compute_pk(σx) The worker algorithm uses pk and σx, and outputs an encoded version σy of the output y = f(x).
• (y, ⊥)←Verify_sk(τx, σy) The verification algorithm uses the secret value τx and the private key sk to decode σy. It outputs y = f(x) if the decoding of σy denotes a valid output; otherwise it outputs ⊥.
Correctness, security and efficiency are the essential properties that a VC scheme needs to satisfy. These properties are defined as follows:
• Correctness The correctness requirement assures that if the worker performs the computation honestly, then the output result will be validated during the verification check.
• Security A VC scheme is said to be secure if a dishonest worker can never convince the verifier to accept an invalid (incorrect) result of the outsourced function f computed on the provided input x.
• Efficiency The efficiency property guarantees that problem generation and verification of the proof must be done with less computational effort (in terms of time and complexity) than computing f(x) from scratch.
Recently, Fiore et al. added two more properties, termed function privacy and adaptive security, to the generic model [52]. Function privacy makes the protocol able to hide from the server (worker) the function that needs to be computed. Adaptive security guarantees that the scheme remains secure even in the presence of an adversary who is allowed to obtain the encoded input before choosing the function.
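As a purely illustrative instantiation of this four-algorithm interface (a hedged sketch, not the construction of Ref. [1]): the outsourced function below is fixed to matrix multiplication and verification uses Freivalds' randomized check, so that Verify is noticeably cheaper than recomputing the product. All function names are hypothetical.

import random

def keygen(n):
    # KeyGen: the outsourced function is fixed to n x n matrix multiplication;
    # this toy needs no cryptographic keys, so pk just records the dimension.
    return {"n": n}, {}                                   # (pk, sk)

def probgen(sk, A, B):
    # ProbGen: the toy sends the input in the clear (sigma_x) and keeps a
    # private verification value tau_x (here simply the input itself).
    return (A, B), (A, B)                                 # (sigma_x, tau_x)

def compute(pk, sigma_x):
    # Compute: the worker returns C = A * B as the encoded output sigma_y.
    A, B = sigma_x
    n = pk["n"]
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def verify(sk, tau_x, C, reps=20):
    # Verify: Freivalds' check costs O(n^2) per repetition instead of O(n^3).
    A, B = tau_x
    n = len(A)
    for _ in range(reps):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return None               # the output ⊥: reject the worker's result
    return C                          # accept y = f(x)

Each repetition catches an incorrect C with probability at least 1/2, so a handful of repetitions already satisfies the security property, and the check is asymptotically cheaper than recomputing the product, which is what the efficiency property asks for.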
Many extensions of VC schemes have been suggested by the research community in recent years. Parno et al. formally extended the notion of VC to a publicly verifiable computation protocol by adding the properties of public delegation and public verifiability [86]. Public delegation allows arbitrary clients to
outsource inputs for delegation of computation, and public verifiability allows delegators as well as arbitrary clients
to check the correctness of result provided by deterministic
worker, while maintaining the properties of security and efficiency. Besides, the authors also introduced multi-function
VC scheme, in which evaluation of different functions can be
performed on a-priori fixed inputs. Papamanthou et al. nominated a new paradigm known as signatures of correct computation (SCC) [87]. In SCC, client delegates some computation and sends a public key related to the function to untrusted
server. In response, server sends back results of computation
along with signature that assures the correctness of computation of function. Choi et al. presented a new variant of VC for
the first time in 2013, known as multi-client verifiable computation (MVC) [88]. The proposed protocol considers a scenario where multiple clients outsource their computation to
an untrusted server, in order to compute f on their collective
input. In MVC, the authors prove soundness against untrusted
server, and privacy against a malicious client that may collude
with the server. Apon et al. formalized the notion of verifiable
oblivious storage that allows a client to outsource the data to
an untrusted server while ensuring privacy, obliviousness and
verifiability [89].
As a light note, it is worth differentiating between multi-party computation and verifiable computation. The former concept allows multiple distrusting parties to compute a function over joint inputs without revealing their inputs to the other participating parties. A central secure server computes this function on the joint inputs and securely sends the respective results to the corresponding parties. Auctions, private data aggregation in sensor networks, ranking, etc. are some beneficial examples of multi-party computation (secure computation). On the other hand, VC allows a resource constrained user to outsource heavy computations to SPs. These SPs perform the required computation on the private data and send back the computational results along with a proof of correctness to the user. The user then verifies the results of the computation in much less time, and with fewer resources, than required to perform the original computation. In general, VC is a tool that can be used to securely perform multi-party computation [90].
2.3.2 Cryptography based approaches
2.3.2.1 Attribute based encryption (ABE)
With the advent of the big data era, resource constrained users and organizations found it very difficult to securely store the bulk of their data for future use. Therefore, many third parties, known as SPs, came forth to store and manage this vast data for the clients. However, questions of security and of tedious data management, such as access control, arise. For instance, sensitive data needs to be encrypted for its protection. Similarly, only legitimate users, authorized by the competent authority, should be able to access data from the SPs. To cope with these issues, the concept of attribute based encryption (ABE) was put forward by the research community [91, 92].
Actually, ABE is a modified form of identity based encryption (IBE). More precisely, in an ABE system, ciphertext and
user keys are labeled with specific attributes, so that only legitimate users with appropriate attributes can access and decrypt the ciphertext. Formally, ABE falls into two categories, namely, 1) key policy ABE (the access structure is associated with the user's private key), and 2) ciphertext policy ABE (the access structure is embedded in the ciphertext).
2.3.2.2 Fully homomorphic encryption (FHE)
Performing arbitrary computations over encrypted data remained an open question until 2009, when Gentry put forward a revolutionary work presenting a fully homomorphic encryption (FHE) scheme [38]. In detail, FHE allows Alice to encrypt data and send it to Bob such that Carol, knowing only Bob's public key and the ciphertext, is able to perform arbitrary computations over the encrypted plaintext without learning anything about it. Thus, FHE guarantees confidentiality, and Alice does not need to know in advance which computation will be performed. Although FHE provides data privacy, it unfortunately does not assure that the computations performed by Carol are correct. Further, collusion between Bob and Carol can also lead to undesired problems. To overcome these issues, the research community has presented a solution in the form of VC.
Formalizing the VC paradigm in a more appropriate way,
Gennaro et al. introduced a non-interactive VC scheme while
considering the scenario of untrusted workers [1]. The authors used FHE to present a way for lifting up one time outsourcing protocol to many time ones. They utilize Yao’s garbled circuit [49] along with FHE scheme (as a black box)
to present an outsourcing VC protocol that accepts the dynamic and adaptive inputs, and also provides input and output privacy if the acceptance bit of client is kept secret from
worker. In fact, input privacy is offered in a weaker model
where the client is not allowed to ask queries of the worker. Moreover, the client has to undergo an expensive offline preprocessing phase that results in a larger public key size and higher
complexity. Later, in the online phase the time complexity of
client is reduced down to polylogarithmic, and that of worker
is reduced down to polynomial. Thus, client’s computational
investment of offline stage is amortized over the executions
of online stage. Furthermore, the online stage of this protocol is not fully non-interactive as it consists of two messages.
Following Gennaro et al., improved versions of the delegation scheme employing FHE (as a black box) were presented by Chung et al. [85]. The first improved version suggests removing the public key generation as well as the interaction of even a
single message in the offline stage. The offline stage results in
only a secret key of polylogarithmic length. The second version trades off four messages interaction during offline stage
for reducing the computational time complexity of the client.
A pipelined implementation is also proposed in this paper, in
which the client utilizes a constant number of secret keys, and
keeps refreshing the secret keys during the online stage. In
this way, client does not need to re-execute the offline stage,
if rejection of the proof is revealed to worker. However, the
amortization of computational cost during offline stage over
executions of online stage is still an overhead of this proposal.
Furthermore, in the aforementioned proposals [1,85], the oracle access to adversaries could result in attacks that may cause
the leakage of secret information.
Until this time, the presented solutions of VC were in their
simplest form, i.e., these solutions either did not offer public
delegation and verification or just proposed weaker notions.
Focusing on this problem, Parno et al. presented a scheme, in
which a connection between VC and attribute based encryption (ABE) is established while offering public delegation and
public verifiability as well [86]. The authors utilized a one-key secure ABE scheme (as a black box) to present a public VC protocol. In fact, the VC scheme verifies the computation of functions from the class of functions that satisfy ABE access policies. Unfortunately, there are very few classes of functions for which an efficient ABE scheme exists; hence this proposal is limited to computations that can be expressed by polynomial sized Boolean formulas, and does not work for arbitrary polynomial time computations. In addition, the scheme offers only selective security, which requires the adversary to announce in advance the point on which he wants to cheat. Another work is revocable publicly
VC presented by Alderman et al., where the cheating server,
once revoked can never perform further computation [93]. In
order to support their construction, the authors introduced an
entity termed as key distribution center that is responsible for
handling the generation, and distribution of the keys. Key
distribution center also issues the certificates to the honest
server to whom a delegator may outsource the computation.
Recently, the authors put forward some extension based on
this approach in the sense of access control [94] and hybrid
protocol [95].
MVC allows multiple clients to outsource the computation
of n-array function over a series of n joint inputs to server
while maintaining the data privacy of each client. Choi et
al. extended the notion of garbled circuit within single client
non-interactive VC submitted by Gennaro et al. [1] to MVC
by utilizing proxy oblivious transfer [88]. Their outsourcing scheme requires that the computation of the clients must be
independent of function complexity and the input string from
clients should be of the same size. Besides, clients cannot collude with the server in order to avoid malicious activities and
to maintain privacy of each client. Furthermore, their scheme
fails to achieve adaptive soundness, and is susceptible to selective failure attacks. Recently, Gordon et al. investigated the
simulation-based security mechanism that automatically detects many selective failure attacks during an MVC session [96]. The authors presented an MVC construction based
on falsifiable assumption that is feasible to work only when
clients are not allowed to collude with server.
The paradigm SCC towards VC is proposed by Papamanthou et al. [87], in which client delegates some computation
and sends a public key related to the function to untrusted
server. In response, server returns the results of computation
along with a signature that verifies the correctness of the computation of the function. The authors presented SCC schemes for
evaluating and differentiating the multivariate polynomials.
Their schemes are based on bilinear groups, and offer incremental updates to the signing keys, public verifiability and
adaptive security in random oracle model. Nonetheless, these
schemes are not function independent, and are limited to specific functions only. Another work related to verifiable delegated computation on streaming outsourced data is presented
by Backes et al. that offers input independent efficiency, unbounded storage, and function independence [97]. The authors used arithmetic circuits of degree up to 2 for expressing the functions. They introduced homomorphic MACs with
verification that is used as a base for their construction. Although their proposal contains some novel work, it only supports the functionalities of quadratic polynomials. Moreover,
it does not support deletion from dataset, and offers only private verification.
Another approach for providing VC protocol is to invoke multiple servers for parallel execution of computation.
Ananth et al. suggested a way to avoid the preprocessing cost overhead and the utilization of FHE by conceptualizing the use
of multiple servers [51]. Based on one way (DDH) assumption, the authors presented VC protocol that assures privacy
and soundness, while outsourcing the computation to two (n)
servers. Nonetheless, these protocols offer security only as long as at least one of the servers remains honest.
2.3.3 Applications of verifiable computation
The perspective of VC is wide enough to offer many applications, such as secure delegation of confidential data and outsourcing of expensive computations by weak clients and businesses. Applied approaches towards performing computations on previously authenticated data have been presented by the research community, including verifiable keyword
search [50], fully homomorphic message authenticators [98]
and proofs of retrievability [99].
The verifiable keyword search paradigm allows a client to store a large text file on the server and later to query whether a keyword occurs in that file. The server responds with a yes or no answer, and the client then needs to verify the correctness of the keyword search result. A naive approach is to use a Merkle tree for this purpose, while a better one is to encode the file as a polynomial in order to reduce verification complexity.
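The polynomial encoding mentioned above can be sketched as follows: the keywords are encoded as the roots of p(X) over a prime field, so "w occurs in the file" is equivalent to p(h(w)) = 0, and the server can support a yes answer by exhibiting the quotient q(X) = p(X)/(X - h(w)), which the client checks at a point r it fixed during preprocessing. This is a hedged, plaintext illustration; it omits the algebraic PRFs and closed-form efficiency that make verification genuinely cheap in the real schemes.

import hashlib

P = 2**61 - 1                                   # assumed prime field modulus

def h(word):
    return int.from_bytes(hashlib.sha256(word.encode()).digest(), "big") % P

def poly_from_keywords(words):
    # p(X) = product of (X - h(w)); coefficients stored lowest degree first
    coeffs = [1]
    for w in words:
        root = h(w)
        new = [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            new[i] = (new[i] - c * root) % P
            new[i + 1] = (new[i + 1] + c) % P
        coeffs = new
    return coeffs

def poly_eval(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def quotient_by_linear(coeffs, root):
    # synthetic division of p(X) by (X - root); the remainder equals p(root)
    quotient = [0] * (len(coeffs) - 1)
    carry = 0
    for i in range(len(coeffs) - 1, 0, -1):
        carry = (coeffs[i] + carry * root) % P
        quotient[i - 1] = carry
    return quotient

# Preprocessing: the client fixes a point r and stores v = poly_eval(p, r).
# Query "is w present?": the server returns q = quotient_by_linear(p, h(w));
# the client accepts a yes answer iff v == (r - h(w)) * poly_eval(q, r) % P.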
The fully homomorphic message authenticators approach allows an arbitrary party to perform computations over previously outsourced data while producing a tag that (without using the secret key) authenticates the result returned by the server. Later, another client (without any knowledge of the underlying data) uses the corresponding tag along with the common secret key to verify the correctness of the claimed computational result over the authenticated data.
A proofs of retrievability scheme enables a prover on the server side to convince the client that its previously outsourced file is intact and can be retrieved in its entirety, while the client retrieves only a small portion (not the complete file) from the prover. Generally, the client sends a short challenge to the prover, who returns a short answer as a reply without reading the entire claimed file.
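A minimal sketch of that challenge-response pattern (per-block MAC spot checking; real proofs of retrievability additionally use erasure coding and aggregation so that passing audits implies the whole file is recoverable) might look like this:

import hashlib, hmac, random

def tag_blocks(key, blocks):
    # client-side preprocessing: one MAC tag per file block, stored alongside the file
    return [hmac.new(key, str(i).encode() + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]

def challenge(num_blocks, sample_size):
    # the client picks a small random subset of block indices
    return random.sample(range(num_blocks), sample_size)

def respond(blocks, tags, indices):
    # the server returns the challenged blocks together with their stored tags
    return [(i, blocks[i], tags[i]) for i in indices]

def audit(key, response):
    # the client recomputes each MAC; any mismatch means the stored file is damaged
    return all(hmac.compare_digest(
                   hmac.new(key, str(i).encode() + b, hashlib.sha256).digest(), t)
               for i, b, t in response)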
2.3.3.1 Memory delegation and verifiable computation of
polynomials
Working towards secure delegation of confidential data,
Chung et al. presented the memory delegation scheme based
on PIR, and streaming delegation scheme based on FHE
[37]. The verification time of these schemes is logarithmic
to the input size. Although their general-purpose schemes are
reusable, they suffer from substantial overhead due to the usage of FHE, and offer soundness while relying on computational assumptions. In addition, their schemes necessitate
17
the client to be stateful, and also in the streaming delegation
the streaming size needs to be a-priori bounded. Benabbas
et al. notified these weaknesses, and presented an adaptively
secure VC constructions assuming PRF based on DDH [50].
Their main VC scheme deals with higher degree polynomial
functions, and utilizing this construction, the authors further
presented new paradigms for verifiable keyword search and
proofs of retrievability. Besides not being generic, these constructions do not offer public delegation/verification either. Moreover, assuming the subgroup membership problem in bilinear groups of composite order, they nominated a new primitive termed verifiable database (VDB). Generally, VDB enables a resource limited client to outsource a database in tabular form to the server, and then to issue retrieval and update queries. Proceeding with this line
of research, Fiore and Gennaro presented publicly verifiable
outsourcing protocols for evaluation of high degree polynomials and matrix multiplication including their use in verifiable keyword search, proofs of retrievability, linear transformations, and discrete Fourier transform [100]. The authors
utilized the PRF with closed-form efficiency to get shorter
description of the function. However, these constructions require a long preprocessing phase, are applicable only to a specific class of computations, and do not support streaming databases.
The verifiable data streaming (VDS) paradigm, introduced by Dominique Schröder and Heike Schröder [101], enables a resource constrained client to outsource a long stream of elements to an untrusted server. Later, the client may update or retrieve any element from the server. The authors formalized the notion of a chameleon authentication tree to authenticate an exponential number of elements. Following this authentication procedure, their proposed protocol offers dynamic updates and public verifiability of the data in the database. Besides, the security of their protocol depends on the discrete logarithm assumption, though the bandwidth and computational costs are bounded by the a priori limit on the number of streamed values.
Recently, Krupp et al. introduced a new notion of chameleon
vector commitment based on which they proposed a tree
based VDS scheme [102]. The bandwidth and computational costs are reduced, and the scheme is secure in the standard model.
Another applied perspective of VC is studied by Blanton
et al. to securely outsource and verify the large scale biometric computations [103]. The authors put forward a way to
securely outsource computation of all-pairs distance comparisons between two biometric datasets to the untrusted servers.
The servers perform comparisons, and provide the distance
matrix along with the corresponding distribution. The distribution gives the frequency of each discrete distance that
appears in the distance matrix. The data is operated in a
protected format throughout computation; hence, the servers
cannot extract any information from it. In order to verify the
correctness, client injects some fake biometric data items that
are carefully designed and are indistinguishable for servers
from the real ones. By comparing results returned by servers
with the expected resultant values of fake biometric items,
the client gets assurance about the integrity of all the computation performed by the servers. This construction is generic
and could be applied to any distance metric. However, the
higher computational cost at the client side is a major overhead.
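The decoy idea can be sketched as follows (a hedged simplification that ignores the protected data format): the client mixes a few fake items, whose pairwise distances it can compute itself, into the outsourced dataset and later spot-checks the returned distance matrix on exactly those entries. All names below are hypothetical.

import random

def inject_decoys(real_items, decoy_items):
    # mix decoys into the dataset; the client keeps the (position, decoy) pairs secret
    tagged = [("real", x) for x in real_items] + [("decoy", x) for x in decoy_items]
    random.shuffle(tagged)
    data = [x for _, x in tagged]
    decoy_map = [(i, x) for i, (kind, x) in enumerate(tagged) if kind == "decoy"]
    return data, decoy_map

def audit_distance_matrix(matrix, decoy_map, dist):
    # spot-check the returned all-pairs distance matrix on decoy-decoy entries only
    return all(matrix[i][j] == dist(x, y)
               for i, x in decoy_map for j, y in decoy_map)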
2.3.3.2 Outsourcing of complex operations
The computational cost of encryption/decryption process of
ABE schemes rises with the complexity of access policy.
Therefore, a feasible solution is to outsource these computations without revealing the secret information. Li et al. proposed to outsource the encryption process to a MapReduce cloud under the security assumption that the master and at least one slave node are honest [104]. They propose to split the secret used within the ciphertext into n shares via MapReduce; in this way, the slave nodes perform computation on the provided
share to accomplish the process. Working towards delegation
for ABE, Sahai et al. introduced a new property of revocable
storage that allows the SPs to store ciphertexts for revoking
access on formerly encrypted data [105]. Formally, ciphertext delegation utilizes only public information for bounding a ciphertext (pursuing an access policy) to follow a more
restrictive access policy while maintaining the same size of
ciphertext. Although their scheme reduces the complexity to a polylogarithmic number of group elements, it requires costly computations and the maintenance of more attributes. Furthermore, both of the aforementioned schemes
do not offer checking of computations. The delegation of
computation for different ABE operations is checked by the
mechanism given by Li et al. [106]. Their scheme offers to
outsource the partial key generation and the partial decryption process as well. However, the running time of encryption
mechanism can further be reduced [107].
Many VC protocols utilize the garbled circuits for joint
evaluation of functions, while providing absolute data privacy
to clients. Still, the evaluation of functions through garbled circuits is an expensive task, especially for weak devices, and hence needs to be outsourced to a more capable infrastructure. Carter et al. proposed an outsourced oblivious trans-
fer mechanism for securely delegating the evaluation task of
garbled circuit to SP [108]. Although their construction reduces the computational complexity intensively, it requires
the assurance of non-colluding behavior of any party with
the server. Besides, the cryptographic consistency checks of
input/output still remain the overheads to be considered for
improving efficiency. In another paper, Carter et al. notified
some limitations of earlier version, and provided an improved
protocol [109]. They improved the previous protocols while
eradicating the usage of oblivious transfer primitive by reversing the acts of participants and outsourcing the circuit
formation as well. In addition, their construction offers better efficiency by reducing communication rounds, and secure
even if one party colludes with the server.
Another costly operation in discrete-logarithm-based cryptographic primitives is exponentiation modulo a large prime, which is actually onerous for resource constrained
clients. Chen et al. provided a server aided algorithm for
outsourcing modular exponentiations in two untrusted program models by invoking the subroutine Rand [110]. Besides, they proposed the simultaneous modular exponentiations by invoking two modular exponentiations. They utilized their primitive for securely outsourcing the Schnorr
signatures and Cramer-Shoup encryptions. Working on the
same research line, Kiraz and Uzunkol utilized one untrusted server to propose improved outsourcing algorithms with better checkability for simultaneous modular exponentiations, covering private base with public exponent, public base with private exponent, private base with private exponent, and, most innovatively, private bases with private exponents [111]. Besides, the authors upgraded the
two-simultaneous modular exponentiations up to more generalized t-simultaneous modular exponentiations that could
be computed within single round. It could be achieved by invoking a more efficient outsourcing algorithm having public
base with private exponent t times instead of private base with
private exponent; because there is no need to hide the base in
this scenario. Moving further, the application perspective, in the sense of securely outsourcing blind signatures and oblivious transfer, is also presented in the same paper.
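The blinding idea behind such server-aided exponentiation can be sketched as follows: using a precomputed pair (t, g^t) (the role played by the Rand subroutine), the client asks the server to exponentiate a blinded exponent and unblinds the result locally. This single-query sketch hides the private exponent but not the base, and omits the splitting and checkability mechanisms of Refs. [110, 111]; the group parameters are illustrative assumptions only.

import random

p = 2**64 - 59        # an illustrative prime modulus (not a recommended group)
q = p - 1             # exponents are reduced modulo the group order
g = 5

def rand_pair():
    # Rand-style precomputation: an offline pair (t, g^t mod p)
    t = random.randrange(1, q)
    return t, pow(g, t, p)

def outsourced_fixed_base_exp(a, server_pow):
    # compute g^a mod p; the server only ever sees the blinded exponent (a - t) mod q,
    # which is uniformly distributed and so reveals nothing about a
    t, g_t = rand_pair()
    blinded = (a - t) % q
    partial = server_pow(g, blinded)      # heavy work done by the untrusted server
    return (partial * g_t) % p

# honest_server = lambda base, e: pow(base, e, p)
# assert outsourced_fixed_base_exp(1234567, honest_server) == pow(g, 1234567, p)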
3 Proof based implemented models of verifiable computation
The focus of the research community on VC remained purely theoretical for decades. Until 2007, VC could consume several trillion years to verify a computation that, on the other hand, takes a few milliseconds locally. In the following era, several theoretical refinements of interactive, PCP and FHE based constructions led to general purpose computations. Some of these constructions use the power of interaction, while others are based on extracting a commitment; yet another line proposes to hide the queries [62].
Nonetheless, the implementations of these constructions remain a hot question.
During the past few years, a number of projects have aimed squarely at providing practical VC systems. These developments leverage the earlier theoretical concepts of PCPs or muggles, while reducing the implementation overhead by many orders of magnitude. Formally, these approaches refine both the implemented systems and the theory; also, some of the projects
are implemented in a pipeline structure. The common framework of proof based verification models that accomplish VC
by utilizing two ends (1. Front End; 2. Back End) is presented in Fig. 5. More precisely, Front End can be considered
as program translator, as it translates programs into circuits,
while the Back End is made up of argument variants that are
used for verification purpose. The state of the art CPU architecture based proposals that work as Front End include
Refs. [59, 75, 76, 78, 112]. These systems offer good amortization cost and great programmability, but, the employed circuits are costly. On the other hand, the ASIC supportive Front
End [56–58, 60, 61, 65, 113] offers concise circuits, but with
worse amortization cost. The proposals that work for Back
End include Refs. [2, 73, 77, 78, 81]. Among these proposals,
Ref. [2] belongs to interactive proof, [73, 77, 78] refer to noninteractive arguments, while Ref. [81] is composed of interactive argument. Overall, Ref. [77] is considered as a central
construction for Back End.
Utilizing the power of interaction, Cormode et al. [55] proposed an efficient algebraic insight to bring the running time of the prover down to linearithmic, compared with the original polynomial cost of Ref. [2]. They ran their protocol on targeted problems and showed that the approach saves significant verifier time in practice, because it does
not use cryptographic operations. Yet, the overheads of this
approach are burdensome for the programmers, who have
to deal with arithmetic circuits for expressing the computation explicitly. Thaler et al. [114] implemented the aforementioned approach on GPU for improving the efficiency and to
speed up the computational task of prover by 40–100 times,
and also that of the already fast verifier by 100 times. While the aforementioned two constructions employ a streaming model, in which all the data required by the client for processing cannot be stored, Vu et al. extended the work of Ref. [55] and investigated an architecture called Allspice that implements the approach of Ref. [2] using a compiler (in order to support a high level language), a GPU for acceleration,
and a distributed server [65]. The costly offline phase and the
generation of inappropriate wiring arrangements for the circuits of batching model are counted as the major drawbacks
of their approach. In Ref. [115], Thaler worked on large classes of circuits with regular wiring patterns and reduced the linearithmic factor of Ref. [55] to linear. Nevertheless, this proposal is limited to straight line computations.
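At the core of these interactive protocols (muggles/GKR and the refinements above) lies the sum-check protocol. The following minimal sketch runs sum-check for a multilinear polynomial given as a Python callable over a prime field; the prover here is honest and evaluates sums by brute force, whereas the whole point of Refs. [2, 55, 115] is that a real prover exploits circuit structure to do this in (quasi)linear time. Names and the modulus are assumptions.

import random

P = 2**61 - 1                                  # assumed prime field modulus

def sumcheck(f, n):
    # verify the claim T = sum of f(x) over all x in {0,1}^n, for multilinear f
    def partial_sum(prefix):
        free = n - len(prefix)
        total = 0
        for mask in range(2 ** free):
            point = prefix + [(mask >> k) & 1 for k in range(free)]
            total = (total + f(point)) % P
        return total

    claimed_total = partial_sum([])            # the prover's asserted sum T
    claim, prefix = claimed_total, []
    for _ in range(n):
        g0 = partial_sum(prefix + [0])         # prover's message: g_i(0) and g_i(1)
        g1 = partial_sum(prefix + [1])
        if (g0 + g1) % P != claim:             # verifier's round consistency check
            return claimed_total, False
        r = random.randrange(P)                # verifier's random challenge
        claim = (g0 + r * (g1 - g0)) % P       # g_i(r); g_i is linear for multilinear f
        prefix.append(r)
    return claimed_total, claim == f(prefix) % P   # final check at a random point

# f = lambda x: (3 * x[0] * x[1] + x[2]) % P
# total, ok = sumcheck(f, 3)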
A further refinement of the protocol of Ref. [81], known as Pepper, was suggested by Setty et al. utilizing a PCP based argument system [53]. This implementation of the commitment
extraction based approach utilizes the arithmetic circuits over
finite fields, and reduces the costs of prover and verifier as
compared to Ref. [81]. However, it handles control flow inefficiently, represents non-integers clumsily, and performs logical operations and comparisons only by degenerating to complex Boolean circuits. In addition, its model saves verifier time only in an amortized sense, over several computations in a batch, and is not suitable for a single outsourced computation. A more refined version of Pepper is also presented by
Setty et al. known as Ginger that supports general purpose
programming model [54]. Ginger is basically built on linear
PCP, and supports parallelism across cores or GPUs in order to reduce the latency. Nonetheless, similar to Pepper, it
can only handle computations based on repeated structure.
Setty et al. further noticed that QAPs can be considered as
linear PCP, and presented another implementation known as
Zaatar [56]. In fact, Zaatar utilizes the compiler of Ginger
for producing constraints in order to yield linear PCPs, and
offers more generality for VC as compared to Ginger. Also,
Zaatar reduces the running time cost of the verifier and prover more markedly. Regardless, the encoding of computations as constraints, and the other inseparable costs of their protocol, remain major expenses. Besides, the programs to be compiled are written in an academic language, SFDL, which does not support many essential primitive operations, such as recursion and loops.
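As a small, hedged example of the constraint encoding these systems start from (the step a compiler such as Ginger's or Zaatar's performs before producing linear PCPs/QAPs), the computation y = x^3 + x + 5 can be flattened into one multiplication per constraint plus a linear constraint:

P = 2**61 - 1                      # assumed prime field modulus

# constraints for y = x^3 + x + 5 over F_P:
#   x * x = t1,   t1 * x = t2,   t2 + x + 5 = y
CONSTRAINTS = [
    lambda a: (a["x"] * a["x"] - a["t1"]) % P == 0,
    lambda a: (a["t1"] * a["x"] - a["t2"]) % P == 0,
    lambda a: (a["t2"] + a["x"] + 5 - a["y"]) % P == 0,
]

def witness(x):
    # prover side: run the program once and record every intermediate wire value
    t1 = (x * x) % P
    t2 = (t1 * x) % P
    return {"x": x % P, "t1": t1, "t2": t2, "y": (t2 + x + 5) % P}

def satisfied(assignment):
    # the assignment encodes a correct execution iff every constraint holds; the
    # systems above check this via linear PCPs/QAPs rather than constraint by constraint
    return all(c(assignment) for c in CONSTRAINTS)

# satisfied(witness(3))  ->  True

It is exactly this per-constraint view that lets QAPs bundle all constraints into a single polynomial divisibility check.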
Another built system, called Clover, is put forward by Blumberg et al.; it is built on the batching mechanism adopted from Allspice, and its setup cost is amortized over only a single batch, just like the Zaatar amortization model. Unlike the former argument systems, it requires less expensive cryptographic operations (field operations) in the setup phase. When the concrete cost of Clover is compared with that of the other argument based systems that require preprocessing (Allspice, Zaatar, etc.), Clover achieves either better or comparable performance [68]. For instance, Allspice is better at batching, while Zaatar performs well in terms of verifier break-even points and prover costs. Clover's batching performance is competitive with that of Allspice and much better than Zaatar's, while its prover cost is comparable to both systems.
In order to remove interaction during the VC process, Parno et al. proposed a non-interactive implementation called Pinocchio, which utilizes the regular QAPs of Ref. [77] for encoding the circuits, and offers public verifiability as well as ZK properties [57]. Specifically, the Pinocchio system takes the
C code that we wish to outsource, and compiles it into a circuit which ultimately compiles it into QAP. These QAPs are
designed to be sent to some cryptographic protocols, such
as key generation, proof generation, and verification. Henceforth, the setup phase depends on the program, while the
key size and proof generation grow linearly with the circuit size. In addition, their ZK-SNARK implementation requires loop iteration bounds and array accesses to be compile-time constants; hence it does not support programs with data dependencies. Moreover, the construction cost of the verification
proof is still high. Taking the approaches [2, 57] as basis,
Kosba et al. provided an implementation called Trueset that
utilizes the line by line approach in order to handle the set operations (subset of SQL queries) efficiently [113]. Additionally, it utilizes open-source libraries for pairing operations.
Also, it supports hybrid circuit composed of arithmetic and
set gates. Though efficient, Trueset does not offer privacy for confidential information in an appropriate manner. Recently,
for optimizing Pinocchio system by reducing the cost of proof
generation and increasing the flexibility of prover for various
classes of computations, Costello et al. implemented a system known as Geppetto. Geppetto brings forward an innovative concept to utilize MultiQAPs (instead of single QAP) for
reducing the cost of sharing state during computation [58].
Formally, MultiQAPs allow prover (or verifier) to commit
the data one time; thereafter, it can be used in many related
subsequent proofs. The authors further provided a way of optimizing the notion of bounded proof bootstrapping (proofs
about proofs) by combining cryptographic embedding with
MultiQAPs. Another contribution of this approach is the energy saving circuits that take the proof generation closer to
the actual runtime.
The concept of TinyRAM is formalized by Ben-Sasson et
al., which is an unfolded execution of the general purpose
CPU proposed for nondeterministic computation and its verification [112]. Formally, this design is a combination of circuits and the proof machinery, which compiles the C program
into a circuit whose satisfiability assures the correct computation. TinyRAM offers a suitable solution for outsourcing
NP computation along with its succinct verification, while
conforming to the ZK properties (ZK-SNARKs). However,
this approach is space intensive, and leads to a shortage of memory if not dealt with properly. Besides,
the program instructions are embedded into circuits, which
increases the amortization cost. In a subsequent work, the
authors proposed another system consisting of circuit generator and cryptographic proofs that is better in terms of
efficiency and functionality [59] as compared to aforementioned approach. The modified system retrieves the program
instructions directly from RAM, so that executions having
equal number of stages utilize the same circuit. Therefore,
the amortization cost reduces down in an efficient manner.
Since its circuit size is much larger than the above mentioned
approaches, even if the key is generated only one time for
each circuit, a very large proving key is produced. This large
key needs to be stored, and accessed at each proof generation,
hence this proposal is only suitable for short execution.
Taking the aforementioned limitation into account, Braun
et al. proposed a built system called Pantry that extends the
system models of Pinocchio and Zaatar, and supports comparatively long program executions [60]. Unlike previous approaches, Pantry introduced the concept of storage abstraction and provided stateful computation. Formally, computations are expressed using digests that authenticate the state, and those computations are then verifiably outsourced. Thereafter, the prover works with state that matches the
digest. Pantry supports verifiable Map Reduce operations,
data dependent memory access, and verifiable queries on isolated database. Although Pantry is efficient as compared to
the aforementioned approaches in many ways, still the proof
construction and memory access cost is very high. Recently,
Wahby et al. proposed a hybrid built system called Buffet
based on the system models of Tiny RAM and Pantry [61].
Formally, Buffet utilizes the expressiveness of Tiny RAM architecture, and line-by-line compilation model of Pantry for
providing better efficiency and more generic system model.
In addition, Buffet reduces the memory operation cost of Pantry by utilizing the RAM abstraction approach of TinyRAM, although the amortization cost of Buffet is worse and it does not support as many programming constructs as TinyRAM. For a more comprehensive trade-off comparison in terms of expressiveness, cost and functionality, the
interested readers are referred to Fig. 2 of Ref. [62].
4 Discussion and future work
The ultimate goal of VC was well described by Babai et
al. [9] in 1991. The authors pointed towards a setup in which a single trusted PC could check the computation done by a mob of supercomputers running intensively powerful but untrusted software on unreliable hardware. The research community is now standing on the verge of achieving such a setup, and hence is making serious efforts to transform this theoretical concept into a practical realization. Especially
during the previous years, the scholastic community has provided tremendous improvements towards both theoretical and
practical aspects. Many works provided different tradeoffs
between efficiency, generality, interactivity, public verifiability, and ZK. Nonetheless, there still remains an immense requirement to optimize the efficiency, and performance of VC
machinery.
The aim of the VC is to provide a system for securely
outsourcing the computations to powerful servers, and get
back precise results while maintaining privacy. These outsourcing protocols are expected to be unconditional (no assumptions about the verifier or prover), generic (arbitrary computations can be performed) and practical (feasible implementations). Besides, the computational overheads for prover and
verifier during verification process should be less than those
of the computation itself. More specifically, we require that
the running time for verifier and communication cost should
be constant or logarithmic; likewise, for prover, the running
time should be linear or quasi linear in the steps required
for the computation of the function. Technically, we need performance improvements of several orders of magnitude for the verifier and especially for the prover, while making only standard assumptions. However, we are less worried about preprocessing, non-interactivity and succinctness, up to a bearable extent.
Through sincere efforts, successful implementations of VC protocols have reduced the computational overhead of the verifier to several orders of magnitude less than that of the computation itself (though for specific computations only). However, the prover's computational complexity is still high. The successful implementation of such systems, therefore, does not yet guarantee the
practicality of VC.
Although theoretical proposals have provided the basis for VC machinery, they still have several limitations that must be addressed in order to make them more effective and efficient. The basic drawback of audit based schemes and hardware based solutions is their lack of generality, as these solutions do not work for arbitrary computations and necessitate a trusted chain, under the assumption that auditors and hardware work correctly in their respective domains. A possible solution for the audit based approach could be a tree based payment mechanism that can handle the fair payment issues
of workers (payments of n jobs can be completed in log n
steps), and give them enough incentives for the computations
with respect to their efficiency. The major issue with interactive proofs is the huge difference between the computational powers of the prover (super-polynomial) and a weak verifier (polynomial). Hence, a powerful prover could convince a weak verifier of the correctness of statements which, otherwise,
could not be computed by verifier itself. Therefore, a possible open question is to obtain a protocol, in which honest
prover could run in polynomial time and space, while the
communication and running time complexity of verifier could
be based on some known protocol. Theoretical PCP based
approaches provide very long proofs; hence it is not feasible for verifier to check the computations efficiently. Therefore, open problems related to PCPs might be the mechanisms that could work without circuits. Moreover, efficient
transformations from programs to circuits, and the construction of simple, efficient and short PCPs might be the areas
of focus in future developments. Similarly, approaches based
on arguments rely on unrealistic assumptions and suffer from
high computational complexity, so the need of the hour is to
focus on more realistic assumptions that could reduce overall complexity in an efficient manner. For example, although
the known SNARK lifting transformations towards ZK (even
based on non-standard or unrealistic assumptions) are independent of the computation size, the size of these proofs is linear in the witness. Therefore, in the future this proof size might be reduced to sub-linear or below. FHE based solutions are yet
hard enough to implement [116], as it would take several trillion years for performing computations. Therefore, they are
yet not suitable for practical VC solutions. An open direction could be towards building efficient FHE schemes with
reduced complexities. The same goal can also be achieved by
replacing FHE schemes with other cryptographic protocols,
e.g., efficient Garbled circuits could be put forward for offering the same privacy mechanism. Similarly, a UC secure
ABE scheme is required, based on which, an efficient VC
protocol could be proposed. On the same lines, an indirect
revocable predicate encryption scheme would be the future
problem that can provide foundation for constructing a publicly verifiable outsourcing scheme.
There are several other research gaps in the emerging field
of VC that necessitate the scholastic community to focus
more deliberately. For example, one can construct a VC protocol based on falsifiable assumptions, because the former
solutions that realize the client-server collusions impose the
usage of an extractable witness based encryption scheme.
Another open research direction could be the construction
of a multi-client VC scheme using standard assumptions, in
which online setup cost would be independent of the number of parties and the depth of the circuit. A fascinating open
problem is to find an efficient algorithm, based on which,
secure outsourcing of modular exponentiations could be accomplished while utilizing only one untrusted server. Besides, a private VC protocol could be constructed, in which
a client can send separate computations to different servers,
while acquiring verification on the returned results in dispersed manner. By employing this approach, one can reduce the untrusted server collusions. Similarly, an efficient
outsourcing algorithm for secure cryptographic computations
could be investigated that could reduce the communication
overhead, while require fewer modular multiplications, and
forbid modular inversions unlike the former proposals. A possible future direction towards securely outsourcing the data
to untrusted server might be a verifiable oblivious storage
mechanism that could be constructed without utilizing FHE
or SNARKs, in order to outperform oblivious RAM, while
minimizing bandwidth overhead. In addition, a more realistic application in the form of a verifiable keyword search mechanism that supports dynamic database updates is urgently required. A more recently emerging venue is the Internet of Things (IoT), whose devices work with limited resources (battery, storage, processing, etc.) as required by their domain [116]. Thus, secure outsourcing of their tasks could be enabled by providing effective and efficient VC mechanisms, so this can be considered a very interesting research domain. Henceforth, the need of the hour is to provide lightweight ciphers (and other required tools) in order to
support VC protocols. Another domain which demands the
attention of researchers is the construction of practical VC
protocols that could even resist the quantum attacks in future [117].
While taking proof based implemented systems into consideration, the biggest issue is the very high complexity of
prover work. Similarly, performance of such systems and verifier per instance cost are still the overheads to be dealt with
in future. There could be some hybrid implementations hereafter, such as Buffet that would take the advantage of their
preceding models implicitly for providing more generalized
and practical applications. The critical area on which to focus in the future is execution or computational models with appropriate and supportive programming languages that compile efficiently without compromising expressiveness, in order to provide practical and generic VC systems.
More precisely, the need of the hour is to focus on computational models, to develop more generic programming languages, and beyond this all, to reduce verifier and prover
overheads. Many shortcomings that could be addressed in
efficient manners are presented in Table 1. The other target
domains are the scenarios where prover cost may be compromised in order to achieve the prioritized systems. In order to uphold the practical advancements, motivations for the
theoretical advancements, such as standard assumptions, are
needed. Besides, many of the existing systems might be upgraded in terms of public verifiability and ZK, and could be
made more efficient via quality analysis or through stronger
assumptions. Another interesting research domain could be special purpose proof systems that may allow the secure outsourcing of problematic cryptographic and image processing operations.
Despite all the aforementioned inadequacies, the encouraging news is that we are now capable of executing protocols and algorithms that were thought to exist only on paper just a few years ago. We are witnessing better security mechanisms with comparatively low overhead. In particular, VC will be a helping hand in other research directions.
Table 1  Comparison of the state-of-the-art verifiable computation approaches

Reference | Approach type | Contributions | Underlying tools | Supported statement | Security model/assumptions
[1] | Non-interactive | Formalized VC | FHE, garbled circuits | P | Yao-secure garbling, semantically secure FHE
[2] | Interactive | Proofs for muggles | PIR | NC | Existence of PIR
[7] | Interactive | Interactive proof systems, ZK | ZK | NP | Intractability assumption
[8] | Interactive | Arthur-Merlin (AM) class | Elementary combinatorial arguments | NP | Random oracle
[10] | PCP | NP = PCP | PCP theory | NP | -
[12] | PCP | PCP to ZK | CRH, PCP | NP | CRH
[14] | CS | CS proofs | ZK, PCP | NP | Random oracle
[15] | Interactive | IP = PSPACE | Public coins | P | -
[17] | Interactive protocol | Improved delegation using FHE | FHE, PIR | P | Secure FHE and PIR
[37] | Interactive | Memory delegation | CS proofs, FHE | NP | Secure FHE and DDH
[40] | CS | Designated-verifier CS proofs | CS proof system | - | -
[50] | Cryptographic | VC for large polynomials, VDB | Algebraic PRFs | NP | d-SDDH
[53] | PCP | Pepper | PCP, efficient arguments | NP | Semantic security
[54] | PCP | Ginger | Parallel GPU, linear PCP | NP | Semantic security
[55] | Interactive | Implemented VC system | Muggles | NC | Unconditionally secure
[56] | PCP | Zaatar | QAPs, linear PCP | NP | Semantic security
[57] | Non-interactive | Pinocchio | QAP, ZK | NP | q-PKE, q-PDH and 2q-SDH
[58] | Non-interactive | Geppetto | MultiQAPs, ZK | NP | q-PKE, q-PDH and q-SDH
[59] | ZK | Implemented VC system | QAPs, TinyRAM | NP | -
[60] | PCP | Pantry | Pinocchio, Zaatar | NP | Semantic security
[61] | PCP | Buffet | TinyRAM, Pantry | NP | Semantic security
[65] | Interactive | Allspice | Improved [61], Zaatar | NP | Semantic security
[68] | Interactive | Clover | Allspice, Zaatar, Pedersen commitment | - | Semantic security
[73] | ZK | Short pairing-based NIZK arguments | NIZK argument system | NP | q-CPDH, q-PKE
[74] | Non-interactive | Recursive composition and bootstrapping for SNARKs | SNARKs | NP | -
[77] | ZK | QSPs, QAPs | QSP, knowledge of exponent | NP | -
[81] | PCP | Argument system without short PCPs | Additively homomorphic encryption, Hadamard PCP | NP | -
[86] | Cryptographic | Publicly verifiable scheme from ABE | ABE | NC | -
[87] | Cryptographic | SCC | - | - | -
[96] | Cryptographic | Multi-client non-interactive VC | ABE, FHE, garbled circuits | NP | -
[101] | Cryptographic | VDS | Chameleon authentication tree | - | Discrete logarithm assumption
[112] | ZK | ZK-SNARKs | TinyRAM, linear PCP | NP | -
[113] | Non-interactive | TrueSet | QSPs, SNARKs | NP | q-PKE, q-PDH and q-SDH
We hope that in the near future common users will be the beneficiaries of appropriate security mechanisms.
5  Conclusion
The VC paradigm has become a mainstream solution for securely outsourcing computations that are otherwise infeasible for resource-limited clients or organizations to perform locally. Within the VC domain, we observed that most of the work in the period 1995–2005 focused on laying the foundations, whereas the work from 2005 to 2010 concentrated on turning those foundations into VC protocols. The last five years have seen an even stronger focus on VC schemes and related streams. Although several implementations of such systems have been presented in recent years, bringing VC to the verge of practicality, there remains a dire need for further advancement and research in this domain. This paper presents a detailed study of VC and traces the chronological research developments in the field since 1985, classifying the presented schemes and protocols with respect to their research domains. We discussed diverse research contributions and provided a critical analysis of their pros, cons, and applications, which should help researchers understand the underlying concepts and the remaining challenges for future contributions.
Acknowledgements  This work was supported by the National Natural Science Foundation of China (NSFC) (Grant Nos. 61370194, 61411146001, and 61502048).

References

1. Gennaro R, Gentry C, Parno B. Non-interactive verifiable computing: outsourcing computation to untrusted workers. In: Proceedings of Annual Cryptology Conference. 2010, 465–482
2. Goldwasser S, Kalai Y T, Rothblum G N. Delegating computation: interactive proofs for muggles. In: Proceedings of the 40th Annual ACM Symposium on Theory of Computing. 2008, 113–122
3. Anderson D P. Public computing: reconnecting people to science. In: Proceedings of Conference on Shared Knowledge and the Web. 2003, 17–19
4. Anderson D P. BOINC: a system for public-resource computing and storage. In: Proceedings of the 5th IEEE/ACM International Workshop on Grid Computing. 2004, 4–10
5. Anderson D P, Cobb J, Korpela E, Lebofsky M, Werthimer D. SETI@home: an experiment in public-resource computing. Communications of the ACM, 2002, 45(11): 56–61
6. Mell P, Grance T. The NIST definition of cloud computing. National Institute of Standards and Technology, 2009, 53(6): 50
7. Goldwasser S, Micali S, Rackoff C. The knowledge complexity of interactive proof systems. SIAM Journal on Computing, 1989, 18(1): 186–208
8. Babai L. Trading group theory for randomness. In: Proceedings of the 17th Annual ACM Symposium on Theory of Computing. 1985, 421–429
9. Babai L, Fortnow L, Levin L A, Szegedy M. Checking computations in polylogarithmic time. In: Proceedings of the 23rd Annual ACM Symposium on Theory of Computing. 1991, 21–32
10. Arora S, Safra S. Probabilistic checking of proofs: a new characterization of NP. Journal of the ACM, 1998, 45(1): 70–122
11. Arora S, Lund C, Motwani R, Sudan M, Szegedy M. Proof verification and the hardness of approximation problems. Journal of the ACM, 1998, 45(3): 501–555
12. Kilian J. A note on efficient zero-knowledge proofs and arguments. In: Proceedings of the 24th Annual ACM Symposium on Theory of Computing. 1992, 723–732
13. Kilian J. Improved efficient arguments. In: Proceedings of Annual International Cryptology Conference. 1995, 311–324
14. Micali S. Computationally sound proofs. SIAM Journal on Computing, 2000, 30(4): 1253–1298
15. Shamir A. IP = PSPACE. Journal of the ACM, 1992, 39(4): 869–877
16. Lund C, Fortnow L, Karloff H, Nisan N. Algebraic methods for interactive proof systems. Journal of the ACM, 1992, 39(4): 859–868
17. Chung K M, Kalai Y, Vadhan S. Improved delegation of computation using fully homomorphic encryption. In: Proceedings of Annual Cryptology Conference. 2010, 483–501
18. Goldwasser S, Sipser M. Private coins versus public coins in interactive proof systems. In: Proceedings of the 18th Annual ACM Symposium on Theory of Computing. 1986, 59–68
19. Ben-Or M, Goldwasser S, Kilian J, Wigderson A. Multi-prover interactive proofs: how to remove intractability assumptions. In: Proceedings of the 20th Annual ACM Symposium on Theory of Computing. 1988, 113–131
20. Babai L, Fortnow L, Lund C. Non-deterministic exponential time has two-prover interactive protocols. Computational Complexity, 1991, 1(1): 3–40
21. Goldreich O, Micali S, Wigderson A. Proofs that yield nothing but their validity or all languages in NP have zero-knowledge proof systems. Journal of the ACM, 1991, 38(3): 690–728
22. Ben-Or M, Goldreich O, Goldwasser S, Håstad J, Kilian J, Micali S, Rogaway P. Everything provable is provable in zero-knowledge. In: Proceedings of Conference on the Theory and Application of Cryptography. 1988, 37–56
23. Fortnow L. The complexity of perfect zero-knowledge. In: Proceedings of the 19th Annual ACM Symposium on Theory of Computing. 1987, 204–209
24. Aiello W, Håstad J. Perfect zero-knowledge languages can be recognized in two rounds. In: Proceedings of the 28th Annual Symposium on Foundations of Computer Science. 1987, 439–448
25. Feige U, Fiat A, Shamir A. Zero-knowledge proofs of identity. Journal of Cryptology, 1988, 1(2): 77–94
26. Fiat A, Shamir A. How to prove yourself: practical solutions to identification and signature problems. In: Proceedings of Conference on the Theory and Application of Cryptographic Techniques. 1986, 186–194
27. Goldreich O, Oren Y. Definitions and properties of zero-knowledge proof systems. Journal of Cryptology, 1994, 7(1): 1–32
28. Feige U, Shamir A. Zero knowledge proofs of knowledge in two rounds. In: Proceedings of Conference on the Theory and Application of Cryptology. 1989, 526–544
29. Goldreich O, Kahan A. How to construct constant-round zero-knowledge proof systems for NP. Journal of Cryptology, 1996, 9(3): 167–189
30. Groth J, Ostrovsky R, Sahai A. Perfect non-interactive zero knowledge for NP. In: Proceedings of Annual International Conference on the Theory and Applications of Cryptographic Techniques. 2006, 339–358
31. Micali S, Rabin M, Kilian J. Zero-knowledge sets. In: Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science. 2003, 80–91
32. Feige U, Goldwasser S, Lovász L, Safra S, Szegedy M. Approximating clique is almost NP-complete. In: Proceedings of the 32nd Annual Symposium on Foundations of Computer Science. 1991, 2–12
33. Kalai Y T, Raz R. Interactive PCP. In: Proceedings of International Colloquium on Automata, Languages, and Programming. 2008, 536–547
34. Kalai Y T, Raz R. Probabilistically checkable arguments. In: Halevi S, eds. Advances in Cryptology – CRYPTO 2009. Lecture Notes in Computer Science, Vol 5677. Berlin: Springer, 2009, 143–159
35. Cachin C, Micali S, Stadler M. Computationally private information retrieval with polylogarithmic communication. In: Proceedings of International Conference on the Theory and Applications of Cryptographic Techniques. 1999, 402–414
36. Bitansky N, Canetti R, Chiesa A, Tromer E. From extractable collision resistance to succinct non-interactive arguments of knowledge, and back again. In: Proceedings of the 3rd Innovations in Theoretical Computer Science Conference. 2012, 326–349
37. Chung K M, Kalai Y T, Liu F H, Raz R. Memory delegation. In: Proceedings of Annual Cryptology Conference. 2011, 151–168
38. Gentry C. A fully homomorphic encryption scheme. Dissertation for the Doctoral Degree. Stanford: Stanford University, 2009
39. Barak B, Goldreich O. Universal arguments and their applications. SIAM Journal on Computing, 2008, 38(5): 1661–1694
40. Goldwasser S, Lin H, Rubinstein A. Delegation of computation without rejection problem from designated verifier CS-proofs. IACR Cryptology ePrint Archive, 2011, 2011: 456
41. Sadeghi A R. Trusted computing—special aspects and challenges. In: Proceedings of International Conference on Current Trends in Theory and Practice of Computer Science. 2008, 98–117
42. Wen Y, Lee J, Liu Z, Zheng Q, Shi W, Xu S, Suh T. Multi-processor architectural support for protecting virtual machine privacy in untrusted cloud environment. In: Proceedings of the ACM International Conference on Computing Frontiers. 2013
43. Seshadri A, Luk M, Shi E, Perrig A, van Doorn L, Khosla P. Pioneer: verifying code integrity and enforcing untampered code execution on legacy systems. ACM SIGOPS Operating Systems Review, 2005, 39(5): 1–16
44. Parno B, McCune J M, Perrig A. Bootstrapping Trust in Modern Computers. Springer Science & Business Media, 2011
45. Malkhi D, Reiter M. Byzantine quorum systems. Distributed Computing, 1998, 11(4): 203–213
46. Canetti R, Riva B, Rothblum G N. Practical delegation of computation using multiple servers. In: Proceedings of the 18th ACM Conference on Computer and Communications Security. 2011, 445–454
47. Monrose F, Wyckoff P, Rubin A D. Distributed execution with remote audit. In: Proceedings of NDSS. 1999, 3–5
48. Belenkiy M, Chase M, Erway C C, Jannotti J, Küpçü A, Lysyanskaya A. Incentivizing outsourced computation. In: Proceedings of the 3rd International Workshop on Economics of Networked Systems. 2008, 85–90
49. Yao A C. How to generate and exchange secrets. In: Proceedings of the 27th Annual Symposium on Foundations of Computer Science. 1986, 162–167
50. Benabbas S, Gennaro R, Vahlis Y. Verifiable delegation of computation over large datasets. In: Proceedings of Annual Cryptology Conference. 2011, 111–131
51. Ananth P, Chandran N, Goyal V, Kanukurthi B, Ostrovsky R. Achieving privacy in verifiable computation with multiple servers—without FHE and without pre-processing. In: Proceedings of International Workshop on Public Key Cryptography. 2014, 149–166
52. Fiore D, Gennaro R, Pastro V. Efficiently verifiable computation on encrypted data. In: Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security. 2014, 844–855
53. Ben-Or M, Goldwasser S, Kilian J, Wigderson A. Multi-prover interactive proofs: how to remove intractability assumptions. In: Proceedings of the 20th Annual ACM Symposium on Theory of Computing. 1988, 113–131
54. Setty S T, Vu V, Panpalia N, Braun B, Blumberg A J, Walfish M. Taking proof-based verified computation a few steps closer to practicality. In: Proceedings of USENIX Security Symposium. 2012, 253–268
55. Cormode G, Mitzenmacher M, Thaler J. Practical verified computation with streaming interactive proofs. In: Proceedings of the 3rd Innovations in Theoretical Computer Science Conference. 2012, 90–112
56. Setty S, Braun B, Vu V, Blumberg A J, Parno B, Walfish M. Resolving the conflict between generality and plausibility in verified computation. In: Proceedings of the 8th ACM European Conference on Computer Systems. 2013, 71–84
57. Parno B, Howell J, Gentry C, Raykova M. Pinocchio: nearly practical verifiable computation. In: Proceedings of IEEE Symposium on Security and Privacy. 2013, 238–252
58. Costello C, Fournet C, Howell J, Kohlweiss M, Kreuter B, Naehrig M, Parno B, Zahur S. Geppetto: versatile verifiable computation. In: Proceedings of IEEE Symposium on Security and Privacy. 2015, 253–270
59. Ben-Sasson E, Chiesa A, Tromer E, Virza M. Succinct non-interactive zero knowledge for a von Neumann architecture. In: Proceedings of the 23rd USENIX Security Symposium. 2014
60. Braun B, Feldman A J, Ren Z, Setty S, Blumberg A J, Walfish M. Verifying computations with state. In: Proceedings of the 24th ACM Symposium on Operating Systems Principles. 2013, 341–357
61. Wahby R S, Setty S T V, Ren Z, Blumberg A J, Walfish M. Efficient RAM and control flow in verifiable outsourced computation. IACR Cryptology ePrint Archive, 2014, 2014: 674
62. Walfish M, Blumberg A J. Verifying computations without reexecuting them. Communications of the ACM, 2015, 58(2): 74–84
63. Bennet S Y. Using secure coprocessors. Dissertation for the Doctoral Degree. Pittsburgh: Carnegie Mellon University, 1994
64. Smith S W, Weingart S. Building a high-performance, programmable secure coprocessor. Computer Networks, 1999, 31(8): 831–860
65. Vu V, Setty S, Blumberg A J, Walfish M. A hybrid architecture for interactive verifiable computation. In: Proceedings of IEEE Symposium on Security and Privacy. 2013, 223–237
66. Rothblum G N, Vadhan S, Wigderson A. Interactive proofs of proximity: delegating computation in sublinear time. In: Proceedings of the 45th Annual ACM Symposium on Theory of Computing. 2013, 793–802
67. Goldwasser S, Kalai Y T, Rothblum G N. Delegating computation: interactive proofs for muggles. Journal of the ACM, 2015, 62(4): 27
68. Blumberg A J, Thaler J, Vu V, Walfish M. Verifiable computation using multiple provers. IACR Cryptology ePrint Archive, 2014, 2014: 846
69. Goldreich O. Modern Cryptography, Probabilistic Proofs and Pseudorandomness. Springer Science & Business Media, 1998
70. Goldreich O. Zero-knowledge twenty years after its invention. IACR Cryptology ePrint Archive, 2002, 2002: 186
71. Blum M, Feldman P, Micali S. Non-interactive zero-knowledge and its applications. In: Proceedings of the 20th Annual ACM Symposium on Theory of Computing. 1988, 103–112
72. Lapidot D, Shamir A. Publicly verifiable non-interactive zero knowledge proofs. In: Proceedings of Conference on the Theory and Application of Cryptography. 1990, 353–365
73. Groth J. Short pairing-based non-interactive zero-knowledge arguments. In: Proceedings of International Conference on the Theory and Application of Cryptology and Information Security. 2010, 321–340
74. Bitansky N, Canetti R, Chiesa A, Tromer E. Recursive composition and bootstrapping for SNARKs and proof-carrying data. In: Proceedings of the 45th Annual ACM Symposium on Theory of Computing. 2013, 111–120
75. Ben-Sasson E, Chiesa A, Tromer E, Virza M. Scalable zero knowledge via cycles of elliptic curves. In: Proceedings of International Cryptology Conference. 2014, 276–294
76. Chiesa A, Tromer E, Virza M. Cluster computing in zero knowledge. In: Proceedings of Annual International Conference on the Theory and Applications of Cryptographic Techniques. 2015, 371–403
77. Gennaro R, Gentry C, Parno B, Raykova M. Quadratic span programs and succinct NIZKs without PCPs. In: Proceedings of Annual International Conference on the Theory and Applications of Cryptographic Techniques. 2013, 626–645
78. Lipmaa H. Succinct non-interactive zero knowledge arguments from span programs and linear error-correcting codes. In: Proceedings of International Conference on the Theory and Application of Cryptology and Information Security. 2013, 41–60
79. Fauzi P, Lipmaa H, Zhang B. Efficient modular NIZK arguments from shift and product. In: Proceedings of International Conference on Cryptology and Network Security. 2013, 92–121
80. Lipmaa H. Almost optimal short adaptive non-interactive zero knowledge. IACR Cryptology ePrint Archive, 2014, 2014: 396
81. Ishai Y, Kushilevitz E, Ostrovsky R. Efficient arguments without short PCPs. In: Proceedings of the 22nd Annual IEEE Conference on Computational Complexity. 2007, 278–291
82. Di Crescenzo G, Lipmaa H. Succinct NP proofs from an extractability assumption. In: Proceedings of Conference on Computability in Europe. 2008, 175–185
83. Xu G, Amariucai G, Guan Y. Delegation of computation with verification outsourcing: curious verifiers. In: Proceedings of the 2013 ACM Symposium on Principles of Distributed Computing. 2013
84. Setty S, Blumberg A J, Walfish M. Toward practical and unconditional verification of remote computations. In: Proceedings of the 13th USENIX Conference on Hot Topics in Operating Systems. 2011
85. Chung K M, Kalai Y, Vadhan S. Improved delegation of computation using fully homomorphic encryption. In: Proceedings of Annual Cryptology Conference. 2010, 483–501
86. Parno B, Raykova M, Vaikuntanathan V. How to delegate and verify in public: verifiable computation from attribute-based encryption. In: Proceedings of Theory of Cryptography Conference. 2012, 422–439
87. Papamanthou C, Shi E, Tamassia R. Signatures of correct computation. In: Sahai A, eds. Theory of Cryptography. Lecture Notes in Computer Science, Vol 7785. Berlin: Springer, 2013, 222–242
88. Choi S G, Katz J, Kumaresan R, Cid C. Multi-client non-interactive verifiable computation. In: Sahai A, eds. Theory of Cryptography. Lecture Notes in Computer Science, Vol 7785. Berlin: Springer, 2013, 499–518
89. Apon D, Katz J, Shi E, Thiruvengadam A. Verifiable oblivious storage. In: Proceedings of International Workshop on Public Key Cryptography. 2014, 131–148
90. Laud P, Pankova A. Verifiable computation in multiparty protocols with honest majority. In: Proceedings of International Conference on Provable Security. 2014, 146–161
91. Sahai A, Waters B. Fuzzy identity-based encryption. In: Proceedings of Annual International Conference on the Theory and Applications of Cryptographic Techniques. 2005, 457–473
92. Goyal V, Pandey O, Sahai A, Waters B. Attribute-based encryption for fine-grained access control of encrypted data. In: Proceedings of the 13th ACM Conference on Computer and Communications Security. 2006, 89–98
93. Alderman J, Janson C, Cid C, Crampton J. Revocation in publicly verifiable outsourced computation. In: Proceedings of International Workshop on Public Key Cryptography. 2014, 51–71
94. Alderman J, Janson C, Cid C, Crampton J. Access control in publicly verifiable outsourced computation. In: Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security. 2015, 657–662
95. Alderman J, Janson C, Cid C, Crampton J. Hybrid publicly verifiable computation. In: Proceedings of Cryptographers' Track at the RSA Conference. 2016, 147–163
96. Gordon S D, Katz J, Liu F H, Shi E, Zhou H S. Multi-client verifiable computation with stronger security guarantees. In: Proceedings of Theory of Cryptography Conference. 2015, 144–168
97. Backes M, Fiore D, Reischuk R M. Verifiable delegation of computation on outsourced data. In: Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security. 2013, 863–874
98. Gennaro R, Wichs D. Fully homomorphic message authenticators. In: Proceedings of International Conference on the Theory and Application of Cryptology and Information Security. 2013, 301–320
99. Juels A, Kaliski Jr B S. PORs: proofs of retrievability for large files. In: Proceedings of the 14th ACM Conference on Computer and Communications Security. 2007, 584–597
100. Fiore D, Gennaro R. Publicly verifiable delegation of large polynomials and matrix computations, with applications. In: Proceedings of the 2012 ACM Conference on Computer and Communications Security. 2012, 501–512
101. Schröder D, Schröder H. Verifiable data streaming. In: Proceedings of the 2012 ACM Conference on Computer and Communications Security. 2012, 953–964
102. Krupp J, Schröder D, Simkin M, Fiore D, Ateniese G, Nuernberger S. Nearly optimal verifiable data streaming. In: Cheng C M, Chung K M, Persiano G, et al., eds. Public-Key Cryptography – PKC 2016. Lecture Notes in Computer Science, Vol 9614. Berlin: Springer, 2016
103. Blanton M, Zhang Y, Frikken K B. Secure and verifiable outsourcing of large-scale biometric computations. ACM Transactions on Information and System Security, 2013, 16(3): 11
104. Li J, Jia C, Li J, Chen X. Outsourcing encryption of attribute-based encryption with MapReduce. In: Proceedings of International Conference on Information and Communications Security. 2012, 191–201
105. Sahai A, Seyalioglu H, Waters B. Dynamic credentials and ciphertext delegation for attribute-based encryption. In: Safavi-Naini R, Canetti R, eds. Advances in Cryptology – CRYPTO 2012. Lecture Notes in Computer Science, Vol 7417. Berlin: Springer, 2012, 199–217
106. Li J, Huang X, Li J, Chen X, Xiang Y. Securely outsourcing attribute-based encryption with checkability. IEEE Transactions on Parallel and Distributed Systems, 2014, 25(8): 2201–2210
107. Li J, Li X, Wang L, He D, Ahmad H, Niu X. Fuzzy encryption in cloud computation: efficient verifiable outsourced attribute-based encryption. Soft Computing, 2017, 1–8
108. Carter H, Mood B, Traynor P, Butler K. Secure outsourced garbled circuit evaluation for mobile devices. Journal of Computer Security, 2016, 24(2): 137–180
109. Carter H, Lever C, Traynor P. Whitewash: outsourcing garbled circuit generation for mobile devices. In: Proceedings of the 30th Annual Computer Security Applications Conference. 2014, 266–275
110. Chen X, Li J, Ma J, Tang Q, Lou W. New algorithms for secure outsourcing of modular exponentiations. IEEE Transactions on Parallel and Distributed Systems, 2014, 25(9): 2386–2396
111. Kiraz M S, Uzunkol O. Efficient and verifiable algorithms for secure outsourcing of cryptographic computations. International Journal of Information Security, 2016, 15(5): 519–537
112. Ben-Sasson E, Chiesa A, Genkin D, Tromer E, Virza M. SNARKs for C: verifying program executions succinctly and in zero knowledge. In: Canetti R, Garay J A, eds. Advances in Cryptology – CRYPTO 2013. Lecture Notes in Computer Science, Vol 8043. Berlin: Springer, 2013, 90–108
113. Kosba A E, Papadopoulos D, Papamanthou C, Sayed M F, Shi E, Triandopoulos N. TRUESET: faster verifiable set computations. In: Proceedings of USENIX Security Symposium. 2014
114. Thaler J, Roberts M, Mitzenmacher M, Pfister H. Verifiable computation with massively parallel interactive proofs. In: Proceedings of HotCloud. 2012
115. Thaler J. Time-optimal interactive proofs for circuit evaluation. In: Canetti R, Garay J A, eds. Advances in Cryptology – CRYPTO 2013. Lecture Notes in Computer Science, Vol 8043. Berlin: Springer, 2013, 71–89
116. Wang L C, Li J, Ahmad H. Challenges of fully homomorphic encryptions for the Internet of things. IEICE Transactions on Information and Systems, 2016, 99(8): 1982–1990
117. Hong H, Wang L, Ahmad H, Yang Y, Qu Z. Minimum length key in MST cryptosystems. Science China Information Sciences, 2017, 60(5): 05210
Haseeb Ahmad received the BS degree in mathematics from G.C. University, Faisalabad, Pakistan in 2010, and the master degree in computer science from Virtual University, Pakistan in 2012. He is currently a PhD student in the School of Computer Science at Beijing University of Posts and Telecommunications, China. His current research interest includes information security.

Licheng Wang received the BS degree from Northwest Normal University, China in 1995, the MS degree from Nanjing University, China in 2001, and the PhD degree from Shanghai Jiao Tong University, China in 2007. He is an associate professor in Beijing University of Posts and Telecommunications, China. His current research interests include modern cryptography, network security, trust management, etc.

Haibo Hong received the BS degree from Fuyang Normal University, China in 2008, the MS degree from Capital Normal University, China in 2011, and the PhD degree from Beijing University of Posts and Telecommunications, China in 2015. He is now a lecturer in Zhejiang Gongshang University, China. His current research interests include modern cryptography, network security, etc.

Jing Li received the BS degree from Inner Mongol Normal University, China in 2010, and the MS degree from Shanxi Normal University, China in 2013. She is now a PhD candidate studying in Beijing University of Posts and Telecommunications, China. Her current research interests include modern cryptography, network security, finite field and its applications, etc.

Hassan Dawood is working as an assistant professor at the Department of Computer Engineering, University of Engineering and Technology, Pakistan. He received his MS and PhD degrees in computer application technology from Beijing Normal University, China in 2012 and 2015, respectively. His research interests include image restoration, feature extraction and image classification.

Manzoor Ahmed is currently working as a postdoc candidate at the Department of Electronic Engineering, Tsinghua University, China. He completed the PhD degree at Beijing University of Posts and Telecommunications, China in 2015. He received the M.Phil and BE degrees from Pakistan. His research interests include non-cooperative and cooperative game theoretic based resource management in hierarchical heterogeneous networks, interference management in small cell and 5G networks, physical layer security, and information security.

Yixian Yang is a professor of Computer Science and Technology at Beijing University of Posts and Telecommunications (BUPT), China and also the director of the National Engineering Laboratory for Disaster Backup and Recovery of China. He is a fellow of the China Institute of Communications (CIC), and a council member of the Chinese Institute of Electronics (CIE) and the Chinese Association for Cryptologic Research (CACR). He is the editor in chief of the Journal on Communications of China. He received his MS degree in applied mathematics and PhD degree in signal and information processing from BUPT in 1986 and 1988, respectively. His research interests include coding theory and cryptography, information security and network security, disaster backup and recovery, signal and information processing, etc.