2014 Fpfe
Abstract
Functional encryption supports restricted decryption keys that allow users to learn specific
functions of the encrypted messages. Although the vast majority of research on functional en-
cryption has so far focused on the privacy of the encrypted messages, in many realistic scenarios
it is crucial to offer privacy also for the functions for which decryption keys are provided.
Whereas function privacy is inherently limited in the public-key setting, in the private-key
setting it has a tremendous potential. Specifically, one can hope to construct schemes where
encryptions of messages m1 , . . . , mT together with decryption keys corresponding to functions
f1 , . . . , fT , reveal essentially no information other than the values {fi (mj )}i,j∈[T ] . Despite its
great potential, the known function-private private-key schemes either support rather limited
families of functions (such as inner products), or offer somewhat weak notions of function privacy.
We present a generic transformation that yields a function-private functional encryption
scheme, starting with any non-function-private scheme for a sufficiently rich function class. Our
transformation preserves the message privacy of the underlying scheme, and can be instantiated
using a variety of existing schemes. Plugging in known constructions of functional encryption
schemes, we obtain function-private schemes based on the Learning with Errors assumption, on
obfuscation assumptions, on simple multilinear-maps assumptions, or even on the existence of any
one-way function (offering various trade-offs between security and efficiency).
∗
Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 76100, Israel.
Email: [email protected]. Supported by the Israel Science Foundation (Grant No. 468/14) and by
the Alon Young Faculty Fellowship.
†
School of Computer Science and Engineering, Hebrew University of Jerusalem, Jerusalem 91904, Israel. Email:
[email protected]. Supported by the European Union’s Seventh Framework Programme (FP7) via a Marie Curie
Career Integration Grant, by the Israel Science Foundation (Grant No. 483/13), and by the Israeli Centers of Research
Excellence (I-CORE) Program (Center No. 4/11).
1 Introduction
The most classical cryptographic scenario, dating back thousands of years, consists of two parties
who wish to secretly communicate in the presence of an eavesdropper. This classical scenario has
traditionally led the cryptographic community to view the security provided by encryption schemes
as an all-or-nothing guarantee: The encrypted data can be fully recovered using the decryption
key, but it is completely useless without it. In a wide variety of modern scenarios, however, a
more fine-grained approach is urgently needed. Starting with the seminal notion of identity-based
encryption [Sha84, BF03, Coc01], this need has recently motivated the cryptographic community to
develop a vision of functional encryption [SW08, BSW11, O’N10], allowing tremendous flexibility
when accessing encrypted data.
Functional encryption supports restricted decryption keys that allow users to learn specific
functions of the encrypted data and nothing else. More specifically, in a functional encryption
scheme, a trusted authority holds a master secret key known only to the authority. When the
authority is given the description of some function f as input, it uses its master secret key to
generate a functional key skf associated with the function f . Now, anyone holding the functional
key skf and an encryption of some message m, can compute f (m) but cannot learn any addi-
tional information about the message m. Extensive research has recently been devoted to studying
the security of functional encryption schemes and to constructing such schemes (see, for example,
[SW08, BSW11, O’N10, GVW12, AGV+ 13, BO13, BCP14, DIJ+ 13, GGH+ 13, GKP+ 13a] and the
references therein).
Function privacy in functional encryption. The vast majority of research on functional en-
cryption to date has focused on the privacy of encrypted messages. In various scenarios, however,
one should consider not only privacy for the encrypted messages but also privacy for the functions
for which functional keys are provided. Consider, for example, a user who subscribes to an on-line
storage service for storing her files. For protecting her privacy, the user locally encrypts her files
using a functional encryption scheme prior to uploading them to the service. The user can then
remotely query her data by providing the service with a functional key skf corresponding to any
query f . Without any additional privacy guarantees, the functional key skf may entirely reveal the
user’s query f to the service, which is clearly undesirable whenever the query itself contains sensitive
information.
Scenarios of such flavor have motivated the research of function privacy in functional encryption,
first in the private-key setting by Shen, Shi and Waters [SSW09], and very recently in the public-key
setting by Boneh, Raghunathan and Segev [BRS13a, BRS13b] followed by Agrawal et al. [AAB+ 13].
Intuitively, function privacy requires that functional keys reveal no unnecessary information on their
functionality.
The extent to which function privacy can be satisfied differs dramatically between the settings
of private-key and public-key encryption. Specifically, in the public-key setting, where anyone can
encrypt messages, only a limited form of function privacy can be satisfied. This is because given
a functional key skf and the public key pk of the scheme, a malicious user can learn information
about the function f by evaluating it on any point m of his choosing (by first encrypting m and
then using skf to decrypt f (m)). This process reveals non-trivial information about f and in some
cases may entirely leak the function’s description (unless additional restrictions are imposed, see
[BRS13a, BRS13b, AAB+ 13] for more details). As a result, function-private functional encryption
schemes in the public-key setting are quite restricted, and furthermore such schemes have only been
presented with respect to limited function families (e.g., point functions and inner products).
Our work: Function privacy in the private-key setting. In this work, we focus on function
privacy in the private-key setting. In this setting, function privacy has significantly more potential
than in the public-key setting, both as a stand-alone feature and as a very useful building block
(see Section 1.2 for subsequent work). Specifically, one can hope to achieve the following notion
of privacy (stated informally): Any user that obtains encryptions of messages m1 , . . . , mT , and
functional keys corresponding to functions f1 , . . . , fT , learns essentially no information other than
the values {fi (mj )}i,j∈[T ] . This is a strong notion of privacy which has great (realistic) potential for
a wide variety of applications.
Despite its great potential, the known function-private private-key schemes either support only
the inner-product functionality for attribute-based encryption [SSW09, AAB+ 13], or offer only
somewhat weak notions of function privacy [GKP+ 13a, GGG+ 14]. We refer the reader to Section 1.3 for
a detailed discussion of the known function-private schemes. This state of affairs has motivated us
to explore the following fundamental question:
for deterministic functions that is sufficiently expressive. Their work follows up on the work of Goyal
et al. [GJK+ 13] who put forward the notion of functional encryption for randomized functionalities,
and constructed a public-key functional encryption scheme for randomized functionalities based on
the (seemingly significantly stronger) assumption of indistinguishability obfuscation. Ananth et al.
presented a construction of a fully-secure functional encryption scheme from any selectively-secure
functional encryption scheme that is sufficiently expressive (their transformation applies in both the
private-key setting and the public-key setting). Previous constructions of fully-secure schemes were
based on assumptions that seem significantly stronger, such as obfuscation and multilinear maps
assumptions [BCP14, ABG+ 13, Wat14, GGH+ 14].
One of the key insights underlying both of these works is that in the private-key setting, where
encryption is performed honestly by the owner of the master secret key, the power of obfuscation
may not be needed. Instead, they observed that in some cases one can rely on the weaker notion
of function privacy. More specifically, both Komargodski et al. and Ananth et al. showed that any
sufficiently expressive functional encryption scheme may be appropriately utilized, via our function-privacy
techniques, to implement some of the proof techniques that were previously implemented
based on obfuscation (including, for example, a variant of the punctured programming approach of
Sahai and Waters [SW14]).
Function privacy. As mentioned above, Shen, Shi and Waters [SSW09] initiated the research
on predicate privacy in attribute-based encryption in the private-key setting. They constructed
a predicate-private inner-product encryption scheme in the private-key setting. Boneh, Raghu-
nathan and Segev [BRS13a, BRS13b] initiated the research on function privacy in the public key
setting. They constructed function-private public-key functional encryption schemes for point func-
tions (equivalently, anonymous IBE) and for subspace membership. Since their work is in the
public-key setting, their framework assumes that the functions come from a distribution of sufficient
entropy, as otherwise it seems that no realistic notion of function privacy can be satisfied.
Agrawal et al. [AAB+ 13] then presented a general framework for function-private functional
encryption, both in the private-key setting and in the public-key setting, and explored the plausibility of these notions.
Most relevant to our work, they presented the full security notion for function-private functional
encryption in the private-key setting and presented improved constructions for the inner-product
functionality in this model. We note that we refer to their notion of full security as full function
privacy (see Definition 3.2).
Reusable garbled circuits. The related notion of reusable garbled circuits (ruGC) is defined as
follows. Given a secret key, two procedures can be carried out: Garbling a circuit C (which corre-
sponds to generating a function key) and encoding of an input x (which corresponds to an encryption
of a message). Given a garbled C and an encoded input x, it is possible to publicly compute C(x).
The security requirement is that an adversary that chooses C to be garbled and then a sequence of
inputs x1 , . . . , xt to be encoded cannot learn more than C(x1 ), . . . , C(xt ). Security is formalized in a
simulation-based model: The simulator is required to simulate the garbled circuit without knowing
C, and then it is fed with each C(xi ) in turn and is required to simulate the encoded inputs. Goldwasser
et al. [GKP+ 13a, GKP+ 13b] constructed a simulation-secure functional encryption scheme
(without function privacy) and showed how ruGC follows from that primitive1 . The similarity to
function-private functional encryption is apparent, but there are some significant differences. It
1. Additional constructions were presented by Boneh et al. [BGG+ 14], who were able to reduce the garbling
overhead from multiplicative to additive in either the size of the circuit or the size of the encoded input.
follows from the result of [AGV+ 13] that ruGC, much like simulation-secure functional encryption,
cannot securely support an a-priori unbounded number of circuits, whereas we are able to guarantee
function privacy for any polynomial (a-priori unknown) number of function keys. A very similar
argument shows that the situation where C is chosen after the inputs xi is also impossible in the
context of ruGC (at least under a natural definition of the simulation process), whereas we would like
the inputs and functions to be adaptively introduced in arbitrary order. On the flip side, ruGC pro-
vides simulation based security which seems to be a stronger notion than indistinguishability-based
security achieved by our construction.
Multi-input functional encryption. Goldwasser et al. [GGG+ 14] have recently introduced the
notion of multi-input functional encryption (MIFE) schemes. As the name suggests, MIFE al-
lows functional keys to correspond to multi-input functions which can be evaluated on tuples of
ciphertexts. Slightly simplified, the dream notion of security (specifically, indistinguishability-based
security in the private-key setting, which is most relevant to this work) considers an adversary that
is allowed to make functional-key queries as well as message queries containing pairs of messages
(m0 , m1 ), and in response it gets an encryption of mb , where b is a secret bit. We would like the
adversary to be unable to guess b unless it obtained a key for a function that behaves differently
on the m0 ’s and on the m1 ’s. This dream version of security, even just for two inputs, implies
function-private private-key functional encryption: We will use the first input coordinate to encrypt
the description of the function and the second to encrypt the input, and provide a function key for
the universal 2-input function. However, Goldwasser et al. [GGG+ 14] fall short of achieving this
dream version, since their adversary is not allowed to make message queries adaptively. Further-
more, their construction relies on strong notions of obfuscation (indistinguishability obfuscation and
differing-inputs obfuscation), whereas the construction in this paper relies only on private-key
functional encryption (which is currently known to be implied by obfuscation, but no reverse
implication is known, and it is quite possible that such schemes can be constructed under milder assumptions – see Section
2.2.1).
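The reduction sketched above hinges on a functional key for the universal 2-input function, which treats its first input as a description of a function and applies it to the second. The following minimal Python sketch illustrates this; serializing functions via pickle is our own illustrative assumption, not part of the paper's formalism:

```python
import pickle

def U2(f_desc, x):
    # The universal 2-input function: interpret the first input slot as
    # a description of a function f, and evaluate it on the second slot.
    f = pickle.loads(f_desc)
    return f(x)

def inc(x):
    return x + 1

result = U2(pickle.dumps(inc), 4)  # = inc(4) = 5
```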
of the associated functions. Namely, for generating a functional key for a function f , we will
first encrypt the description of f using a symmetric encryption scheme SKE = (SKE.KG, SKE.Enc,
SKE.Dec) to obtain a ciphertext cf ← SKE.Enc(SKE.k, f ), where SKE.k is a key for the scheme
SKE (which does not change throughout the lifespan of the scheme). Then, the key-generation
algorithm would provide a functional key for the function Ucf (m, k) := (SKE.Dec(k, cf ))(m) (that is,
the function that first decrypts cf using the candidate key k, and then applies the resulting function
on m). The semantic security of SKE guarantees that the function Ucf hides the description of f , as
long as the key SKE.k is not known. In order to maintain the functionality, the message encryption
algorithm must also change: Rather than encrypting the message m alone using the underlying
functional encryption scheme, we will now encrypt the pair (m, SKE.k). One can verify that the
functionality of the scheme still holds since clearly Ucf (m, SKE.k) = f (m).
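As a functionality check, the wrapped function Ucf can be sketched as follows. The "symmetric scheme" here is a trivially insecure placeholder used only to illustrate the syntax, and all names are ours:

```python
# Insecure placeholder "symmetric scheme" (illustration of the syntax only):
def ske_enc(key, f):
    return (key, f)

def ske_dec(key, c):
    stored_key, f = c
    assert key == stored_key, "wrong key"
    return f

SKE_k = "SKE.k"

def double(m):
    return 2 * m

# Key generation: encrypt the description of f under SKE.k.
c_f = ske_enc(SKE_k, double)

def U(m, k):
    # U_cf(m, k) := (SKE.Dec(k, c_f))(m)
    return ske_dec(k, c_f)(m)

out = U(7, SKE_k)  # functionality: U(m, SKE.k) = f(m) = 14
```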
One could hope to prove that this construction is function private. Indeed, Goldwasser et al.
[GKP+ 13a] used this exact scheme to construct reusable garbled circuits. This approach by itself,
however, is insufficient for our purposes. On one hand, during the proof of security we would like
to rely on the semantic security of SKE for arguing that the function Ucf hides the description
of f . This implies that the key SKE.k should be kept completely secret. On the other hand, the
functionality of the scheme must be preserved even during the proof of security. Thus, in order to
allow adversaries to use the functional key for the function Ucf , the key SKE.k must be used while
encrypting messages as above. This conflict is the main challenge that our construction overcomes.
We note that this conflict arises also in the construction of reusable garbled circuits of Goldwasser
et al. [GKP+ 13a]. However, they consider only a selective single-function security notion, asking
adversaries to specify a challenge function f prior to receiving any encryptions. Within such a
selective framework, the conflict is easily resolved: During the proof of security one can preserve
the functionality by modifying the encryption algorithm to encrypt f (m) instead of encrypting m
itself. Thus, the description of the function f is in fact not needed, but only the value f (m) is
needed for each encrypted message m (note that f (m) is anyway known to the adversary). This
approach, however, seems inherently limited to a selective framework, whereas we would like to
allow adversaries to adaptively query the key-generation and encryption oracles, at any point in
time, and for any polynomial number of queries2 .
Our scheme. To get around the aforementioned obstacle, we show that the Naor-Yung “double
encryption” methodology [NY90] can be adapted to our setting. Instead of encrypting the descrip-
tion of f only once, we encrypt it twice using two independent symmetric keys. For preserving the
functionality of the system, only one out of the two keys will be explicitly needed, and this allows us
to attack the other key. Combined with the message privacy of the underlying functional encryption
scheme, this approach enables us to prove the security of our scheme.
More specifically, the master secret key of our scheme consists of a master secret key msk for
the underlying functional encryption scheme, and two keys, SKE.k and SKE.k′ , for a symmetric-key
CPA-secure scheme. In order to generate a functional key for a function f , we first generate two
symmetric encryptions c ← SKE.Enc(SKE.k, f ) and c′ ← SKE.Enc(SKE.k′ , f ) of the description of
f . Then, we issue a functional key for the function Uc,c′ which is defined as follows on inputs of the
form (m, m′ , k, k ′ ): If k ≠ ⊥ then decrypt c using k for obtaining a function f , and output f (m).
Otherwise, if k = ⊥, then decrypt c′ using k ′ for obtaining a function f ′ , and output f ′ (m′ ). In order
2. The approach of Goldwasser et al. can be extended to deal with any a-priori bounded number of functions, as long
as they are specified in advance (this is done using [GVW12]). In this case, the length of ciphertexts in their scheme
would be linear in the number of functions. This is in fact inherent to their setting, as they consider a simulation-based
notion of security [AGV+ 13]. We consider indistinguishability-based notions of security, and would like to inherit the
(either full or selective) security of the underlying functional encryption scheme.
to encrypt a message m, we will use the encryption scheme of the underlying functional encryption
scheme to encrypt (m, ⊥, SKE.k, ⊥) using its master secret key msk. Note that this scheme works
quite similarly to the aforementioned intuitive idea, only it has “placeholders” for elements that will
be used in the proof.
Towards illustrating some of the ideas underlying the proof of security, consider an adversary
that makes just one encryption query (m0 , m1 ), and just one key-generation query (f0 , f1 ) in some
arbitrary order (recall that we require f0 (m0 ) = f1 (m1 )). The view of this adversary consists of
an encryption of mb and a functional key for fb for a uniformly chosen bit b ∈ {0, 1}. The proof
starts by modifying the functional key: Instead of computing c′ ← SKE.Enc(SKE.k′ , fb ) we compute
c′ ← SKE.Enc(SKE.k′ , f1 ). Note that since the key SKE.k′ is in fact not being used, and since the
functionality of the functional key is not hurt (c′ is anyway not used for decryption), the CPA-security
of the symmetric scheme implies that this goes unnoticed. Next, we modify the encryption algorithm
to encrypt (⊥, m1 , ⊥, SKE.k′ ) instead of (mb , ⊥, SKE.k, ⊥). This time the adversary will not notice
the change due to the message-privacy of the underlying functional encryption scheme, since the new
and old ciphertexts will decrypt to the same value fb (mb ) = f1 (m1 ) under the modified functional
key. Finally, we modify the functional key once again: Instead of computing c ← SKE.Enc(SKE.k, fb )
we compute c ← SKE.Enc(SKE.k, f1 ). As before, since the key SKE.k is in fact not being used, and
since the functionality of the functional key is not hurt, then the CPA-security of the symmetric
scheme implies that this goes unnoticed. At this point, we observe that the view of the adversary is
in fact completely independent of the choice of the bit b, and the result follows. We refer the reader
to Section 4 for the formal description and proof of our scheme.
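The hybrid sequence for this single-query case can be summarized symbolically. The sketch below is our own notation (E_k(·) denotes SKE.Enc under key k, and strings are purely symbolic); it lists the adversary's view in each hybrid and checks that the final hybrid is independent of b:

```python
def view(b, hybrid):
    # Adversary's view: (FE plaintext of the challenge ciphertext,
    #                    the pair (c, c') embedded in the functional key).
    fb, m_b = f"f{b}", f"m{b}"
    if hybrid == 0:    # the real experiment
        return (m_b, "⊥", "k", "⊥"), (f"E_k({fb})", f"E_k'({fb})")
    if hybrid == 1:    # c' switched to encrypt f1 (CPA security of SKE; k' is unused)
        return (m_b, "⊥", "k", "⊥"), (f"E_k({fb})", "E_k'(f1)")
    if hybrid == 2:    # FE plaintext switched (message privacy; decryptions agree)
        return ("⊥", "m1", "⊥", "k'"), (f"E_k({fb})", "E_k'(f1)")
    # hybrid 3: c switched to encrypt f1 (CPA security of SKE; k is now unused)
    return ("⊥", "m1", "⊥", "k'"), ("E_k(f1)", "E_k'(f1)")

same = view(0, 3) == view(1, 3)  # the final view is independent of b
```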
1.6 Paper Organization
The remainder of this paper is organized as follows. In Section 2 we introduce the basic notation
and tools underlying our construction. In Section 3 we introduce the notions of function privacy
that are considered in this work. In Section 4 we present and prove the security of our generic
construction of a function-private scheme.
2 Preliminaries
In this section we present the notation and basic definitions that are used in this work. For a
distribution X we denote by x ← X the process of sampling a value x from the distribution X.
Similarly, for a set X we denote by x ← X the process of sampling a value x from the uniform
distribution over X . For an integer n ∈ N we denote by [n] the set {1, . . . , n}. A real function over
the natural numbers is negligible if it vanishes faster than the inverse of any polynomial.
We rely on the following standard notion of a left-or-right oracle when formalizing the security
of encryption schemes:
Definition 2.1 (Left-or-right oracle). Let O be a probabilistic two-input functionality. For each
b ∈ {0, 1} we denote by Ob the probabilistic three-input functionality Ob (k, x0 , x1 ) := O(k, xb ).
Definition 2.2 (CPA security). A private-key encryption scheme Π = (KG, Enc, Dec) is CPA-secure
if for any probabilistic polynomial-time adversary A, there exists a negligible function ν(λ) such that
Adv^{CPA}_{Π,A}(λ) := | Pr[A^{Enc_0(k,·,·)}(λ) = 1] − Pr[A^{Enc_1(k,·,·)}(λ) = 1] | ≤ ν(λ),
where the probability is taken over the choice of k ← KG(1λ ) and over the randomness of A.
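Definitions 2.1 and 2.2 can be phrased operationally. The sketch below is our own toy code: it runs the left-or-right experiment against an insecure identity "scheme", for which a trivial adversary attains advantage 1:

```python
def cpa_experiment(enc, key, adversary, b):
    # The adversary interacts with the left-or-right oracle Enc_b(k, ., .).
    oracle = lambda x0, x1: enc(key, (x0, x1)[b])
    return adversary(oracle)

# An insecure "scheme" that outputs its plaintext verbatim:
identity_enc = lambda key, m: m

# A distinguisher: query the oracle on (b"0", b"1") and output what it sees.
adversary = lambda oracle: 1 if oracle(b"0", b"1") == b"1" else 0

g0 = cpa_experiment(identity_enc, None, adversary, 0)  # adversary outputs 0
g1 = cpa_experiment(identity_enc, None, adversary, 1)  # adversary outputs 1
advantage = abs(g1 - g0)  # = 1: far from negligible, so this scheme is not CPA-secure
```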
A private-key functional encryption scheme over a message space M and a function space F
is a quadruple (Setup, KG, Enc, Dec) of probabilistic polynomial-time algorithms. The setup al-
gorithm Setup takes as input the unary representation 1λ of the security parameter λ ∈ N and
outputs a master-secret key msk. The key-generation algorithm KG takes as input a master-secret
key msk and a function f ∈ F, and outputs a functional key skf . The encryption algorithm Enc
takes as input a master-secret key msk and a message m ∈ M, and outputs a ciphertext c. The
decryption algorithm Dec takes as input a functional key skf and a ciphertext c, and outputs a
message m ∈ M or the symbol ⊥. In
terms of correctness we require that for every function f ∈ F and message m ∈ M it holds that
Dec(KG(msk, f ), Enc(msk, m)) = f (m) with all but a negligible probability over the internal random-
ness of the algorithms Setup, KG, Enc, and Dec.
In terms of message privacy we require that encryptions of any two adversarially-chosen messages,
m0 and m1 , are computationally indistinguishable for any adversary that may adaptively obtain
functional keys for any function f ∈ F as long as f (m0 ) = f (m1 ). This is formalized via the
following definitions. Recall (Definition 2.1) that for a private-key functional encryption scheme
Π = (Setup, KG, Enc, Dec) and for any b ∈ {0, 1} we denote by Encb the left-or-right encryption
oracle Encb (msk, m0 , m1 ) := Enc(msk, mb ).
Definition 2.4 (Full message privacy). A private-key functional encryption scheme Π = (Setup,
KG, Enc, Dec) over a message space M = {Mλ }λ∈N and a function space F = {Fλ }λ∈N is fully
message private if for any valid message-privacy adversary A, there exists a negligible function ν(λ)
such that
Adv^{MP}_{Π,A}(λ) := | Pr[A^{KG(msk,·),Enc_0(msk,·,·)}(λ) = 1] − Pr[A^{KG(msk,·),Enc_1(msk,·,·)}(λ) = 1] | ≤ ν(λ),
where the probability is taken over the choice of msk ← Setup(1λ ) and over the randomness of A.
The corresponding selective-message experiment proceeds as follows, for a bit b ∈ {0, 1}:
1. msk ← Setup(1λ ).
2. ((m0,1 , . . . , m0,T ) , (m1,1 , . . . , m1,T ) , state) ← A(1λ ), where m0,i , m1,i ∈ Mλ for all i ∈ [T ].
3. c∗i ← Enc(msk, mb,i ) for all i ∈ [T ].
4. b′ ← AKG(msk,·) (c∗1 , . . . , c∗T , state), where for each of A’s queries f to KG(msk, ·) it holds that
f (m0,i ) = f (m1,i ) for all i ∈ [T ].
5. Output b′ .
Definition 2.6 (Selective-function message privacy). A private-key functional encryption scheme
Π = (Setup, KG, Enc, Dec) over a message space M = {Mλ }λ∈N and a function space F = {Fλ }λ∈N
is T -selective-function message private, where T = T (λ), if for any probabilistic polynomial-time
adversary A there exists a negligible function ν(λ) such that
Adv^{sfMP}_{Π,A,T}(λ) := | Pr[Expt^{sfMP}_{Π,A,T}(λ, 0) = 1] − Pr[Expt^{sfMP}_{Π,A,T}(λ, 1) = 1] | ≤ ν(λ),
where, for each b ∈ {0, 1}, the experiment Expt^{sfMP}_{Π,A,T}(λ, b) proceeds as follows:
1. msk ← Setup(1λ ).
2. (f1 , . . . , fT , state) ← A(1λ ), where fi ∈ Fλ for all i ∈ [T ].
3. skfi ← KG(msk, fi ) for all i ∈ [T ].
4. b′ ← AEncb (msk,·,·) (skf1 , . . . , skfT , state), where for each of A’s queries (m0 , m1 ) to Encb (msk, ·, ·)
it holds that fi (m0 ) = fi (m1 ) for all i ∈ [T ].
5. Output b′ .
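The experiment above can be transcribed as a short driver loop. The code below is our own sketch, run against a functionality-only stand-in scheme for which no security is claimed:

```python
class ToyScheme:
    # Functionality-only stand-in: a "key" is the function, a "ciphertext" the message.
    def setup(self): return None
    def keygen(self, msk, f): return f
    def enc(self, msk, m): return m

def expt_sfmp(scheme, adversary, b):
    msk = scheme.setup()
    fs, state = adversary.choose_functions()             # step 2: selective key queries f1..fT
    sks = [scheme.keygen(msk, f) for f in fs]            # step 3
    enc_b = lambda m0, m1: scheme.enc(msk, (m0, m1)[b])  # step 4: left-or-right oracle
    return adversary.guess(sks, state, enc_b)            # step 5: output b'

class Guesser:
    def choose_functions(self):
        return [abs], None            # one selective key query: f = absolute value
    def guess(self, sks, state, enc_b):
        ct = enc_b(-3, 3)             # valid query: abs(-3) == abs(3)
        return 0                      # a secure scheme forces near-random guessing

b0 = expt_sfmp(ToyScheme(), Guesser(), 0)
b1 = expt_sfmp(ToyScheme(), Guesser(), 1)
```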
3 Modeling Function Privacy in the Private-Key Setting
In this section we introduce the notions of function privacy that are considered in this work. We con-
sider three notions: full function privacy, selective-message function privacy, and selective-function
function privacy. These are indistinguishability-based notions whose goal is to guarantee that func-
tional keys reveal no unnecessary information on their functionality. Specifically, these notions
ask that any efficient adversary that obtains encryptions of messages m1 , . . . , mT , and functional
keys corresponding to functions f1 , . . . , fT , learns essentially no information other than the values
{fi (mj )}i,j∈[T ] . Our notions generalize the standard message-privacy notions for functional encryp-
tion (see Section 2.2) by taking into account function privacy in addition to message privacy.
Full function privacy. The strongest notion that we consider, which we refer to as full function
privacy, was recently put forward by Agrawal et al. [AAB+ 13] who generalized the notion of Shen,
Shi and Waters [SSW09] for predicate-privacy in attribute-based encryption. This notion consid-
ers both privacy of functional keys and privacy of encrypted messages in a completely symmetric
manner. Specifically, we consider adversaries that interact with a left-or-right key-generation oracle
KGb (msk, ·, ·), and with a left-or-right encryption oracle Encb (msk, ·, ·) (where both oracles operate
using the same bit b)4 . We allow adversaries to adaptively interact with these oracles for any poly-
nomial number of queries (which does not have to be bounded in advance), and the adversaries’
goal is to distinguish the cases b = 0 and b = 1. Our only requirement from such adversaries is that
for all (f0 , f1 ) and (m0 , m1 ) with which they query the oracles KGb and Encb , respectively, it holds
that f0 (m0 ) = f1 (m1 ). We note that this is clearly an inherent requirement.
Definition 3.2 (Full function privacy). A private-key functional encryption scheme Π = (Setup,
KG, Enc, Dec) over a message space M = {Mλ }λ∈N and a function space F = {Fλ }λ∈N is fully
function private if for any valid function-privacy adversary A, there exists a negligible function ν(λ)
such that
Adv^{FP}_{Π,A}(λ) := | Pr[A^{KG_0(msk,·,·),Enc_0(msk,·,·)}(λ) = 1] − Pr[A^{KG_1(msk,·,·),Enc_1(msk,·,·)}(λ) = 1] | ≤ ν(λ),
where the probability is taken over the choice of msk ← Setup(1λ ) and over the randomness of A.
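The interaction of Definition 3.2 can be sketched as a pair of oracles sharing one secret bit. This is our own toy code (the stand-in scheme provides functionality only, and the query pair is chosen to satisfy the validity requirement f0 (m0 ) = f1 (m1 )):

```python
def full_fp_oracles(scheme, msk, b):
    # Both left-or-right oracles answer relative to the same secret bit b.
    kg_b = lambda f0, f1: scheme.keygen(msk, (f0, f1)[b])
    enc_b = lambda m0, m1: scheme.enc(msk, (m0, m1)[b])
    return kg_b, enc_b

class ToyScheme:
    # Functionality-only stand-in: a "key" is the function, a "ciphertext" the message.
    def setup(self): return None
    def keygen(self, msk, f): return f
    def enc(self, msk, m): return m

scheme = ToyScheme()

kg1, enc1 = full_fp_oracles(scheme, scheme.setup(), b=1)
sk = kg1(abs, len)      # key query (f0, f1)
ct = enc1(-3, "abc")    # message query (m0, m1); valid since abs(-3) == len("abc") == 3
v1 = sk(ct)             # decryption reveals f1(m1) = len("abc") = 3

kg0, enc0 = full_fp_oracles(scheme, None, b=0)
v0 = kg0(abs, len)(enc0(-3, "abc"))  # f0(m0) = abs(-3) = 3: the same value either way
```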
Selective notions. We consider two relaxations of our notion of full function privacy from Defini-
tion 3.2. The first, which we refer to as selective-message function privacy restricts the access that
adversaries have to the left-or-right encryption oracle. Specifically, this notion asks that adversaries
choose in advance their set of encryption queries. We note that adversaries are still given oracle
access to the left-or-right key-generation oracle, with which they can interact in an adaptive manner
for any polynomial number of queries. The second, which we refer to as selective-function function
4. Recall (Definition 2.1) that for a probabilistic two-input functionality O and for b ∈ {0, 1}, we denote by Ob the
probabilistic three-input functionality Ob (k, x0 , x1 ) := O(k, xb ).
privacy restricts the access that adversaries have to the left-or-right key-generation oracle. Specif-
ically, this notion asks that adversaries choose in advance their set of key-generation queries. We
note that adversaries are still given oracle access to the left-or-right encryption oracle, with which
they can interact in an adaptive manner for any polynomial number of queries. In addition, we note
that our definition of a valid function-privacy adversary (Definition 3.1) naturally extends to the
selective setting.
The selective-message function-privacy experiment proceeds as follows, for a bit b ∈ {0, 1}:
1. msk ← Setup(1λ ).
2. ((m0,1 , . . . , m0,T ) , (m1,1 , . . . , m1,T ) , state) ← A(1λ ), where m0,i , m1,i ∈ Mλ for all i ∈ [T ].
3. c∗i ← Enc(msk, mb,i ) for all i ∈ [T ].
4. b′ ← AKGb (msk,·,·) (c∗1 , . . . , c∗T , state), where for each of A’s queries (f0 , f1 ) to KGb (msk, ·, ·) it
holds that f0 (m0,i ) = f1 (m1,i ) for all i ∈ [T ].
5. Output b′ .
The selective-function function-privacy experiment proceeds as follows, for a bit b ∈ {0, 1}:
1. msk ← Setup(1λ ).
2. ((f0,1 , . . . , f0,T ) , (f1,1 , . . . , f1,T ) , state) ← A(1λ ), where f0,i , f1,i ∈ Fλ for all i ∈ [T ].
3. sk∗i ← KG(msk, fb,i ) for all i ∈ [T ].
4. b′ ← AEncb (msk,·,·) (sk∗1 , . . . , sk∗T , state), where for each of A’s queries (m0 , m1 ) to Encb (msk, ·, ·)
it holds that f0,i (m0 ) = f1,i (m1 ) for all i ∈ [T ].
5. Output b′ .
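Analogously, the selective-function function-privacy game fixes the key queries, now as pairs, up front. Again a functionality-only sketch in our own code:

```python
class ToyScheme:
    # Functionality-only stand-in: a "key" is the function, a "ciphertext" the message.
    def setup(self): return None
    def keygen(self, msk, f): return f
    def enc(self, msk, m): return m

def expt_sf_fp(scheme, adversary, b):
    msk = scheme.setup()
    pairs, state = adversary.choose_function_pairs()      # step 2: pairs (f0,i, f1,i)
    sks = [scheme.keygen(msk, pair[b]) for pair in pairs] # step 3: keys for the f_{b,i}
    enc_b = lambda m0, m1: scheme.enc(msk, (m0, m1)[b])   # left-or-right encryption oracle
    return adversary.guess(sks, state, enc_b)             # steps 4-5

class Guesser:
    def choose_function_pairs(self):
        return [(abs, len)], None
    def guess(self, sks, state, enc_b):
        ct = enc_b(-3, "abc")   # valid query: abs(-3) == len("abc") == 3
        return 0                # with a secure scheme the guess cannot depend on b

b0 = expt_sf_fp(ToyScheme(), Guesser(), 0)
b1 = expt_sf_fp(ToyScheme(), Guesser(), 1)
```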
Finally, we observe that due to the symmetry between the roles of the encryption oracle and
the key-generation oracle in these two selective notions, they are in fact equivalent when switching
between the encryption algorithm and key-generation algorithm of any given scheme. That is, a
private-key functional encryption scheme (Setup, KG, Enc, Dec) is selective-message function private
if and only if the scheme (Setup, Enc, KG, Dec) is selective-function function private. To be a little
more accurate, replacing the roles of a function f and a message m may require some standard “type
casting” to represent a message as a function and a function as a message. This is done using universal
machines: To cast a function f as a message, we consider its description as the message to be
encrypted. This means that if Enc only takes bounded length messages, then the new scheme will
only support functions with bounded description lengths. To cast a message m as a function, we
consider a universal function that accepts a description of a function f and outputs f (m). Again,
depending on the computational model under consideration, this may impose some restrictions. For
example, if working over circuits then the universal circuit imposes an upper bound on the size of
the functions supported by the new scheme, whereas that may not have been required in the original
scheme before the switch (however, in this example, if the function size was a-priori unbounded in the
original scheme, then the message space after the switch will become a-priori length unbounded).
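This “type casting” can be made concrete: a function is cast as a message via its description, and a message m is cast as the universal function f ↦ f(m). A small sketch, in our own code, with pickle standing in for “description”:

```python
import pickle

def function_as_message(f):
    # A function becomes a message: simply (a description of) its code.
    return pickle.dumps(f)

def message_as_function(m):
    # A message becomes the universal function U_m(f) := f(m).
    return lambda f_desc: pickle.loads(f_desc)(m)

def square(x):
    return x * x

U_m = message_as_function(6)
out = U_m(function_as_message(square))  # = square(6) = 36
```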
4 Our Generic Construction
In this section we present our generic construction of a function-private private-key functional encryption
scheme. Our construction relies on the following two building blocks:
• A private-key functional encryption scheme FE = (FE.Setup, FE.KG, FE.Enc, FE.Dec).
• A private-key encryption scheme SKE = (SKE.KG, SKE.Enc, SKE.Dec).5
Our new functional encryption scheme FPE = (Setup, KG, Enc, Dec) is defined as follows.
The setup algorithm. On input the security parameter 1λ the setup algorithm Setup samples
FE.msk ← FE.Setup(1λ ), SKE.k ← SKE.KG(1λ ), and SKE.k′ ← SKE.KG(1λ ). Then, it outputs
msk = (FE.msk, SKE.k, SKE.k′ ).
The key-generation algorithm. On input the master secret key msk = (FE.msk, SKE.k, SKE.k′ )
and a function f , the key-generation algorithm KG first computes c ← SKE.Enc(SKE.k, f ) and
c′ ← SKE.Enc(SKE.k′ , f ). Then, it computes FE.SKUc,c′ ← FE.KG(FE.msk, Uc,c′ ), where the function
Uc,c′ is described in Figure 1. Finally, it outputs skf = FE.SKUc,c′ .
The encryption algorithm. On input the master secret key msk = (FE.msk, SKE.k, SKE.k′ ) and
a message m, the encryption algorithm Enc outputs c ← FE.Enc(FE.msk, (m, ⊥, SKE.k, ⊥)).
The decryption algorithm. On input a functional key skf and a ciphertext c, the decryption
algorithm Dec outputs FE.Dec(skf , c).
Figure 1: The function Uc,c′(m, m′, k, k′). If k ≠ ⊥, it computes f ← SKE.Dec(k, c) and outputs f(m); otherwise, it computes f′ ← SKE.Dec(k′, c′) and outputs f′(m′).
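The data flow of the four algorithms above can be sketched as follows. This is a deliberately insecure toy: the "FE ciphertext" is just the plaintext tuple and the "SKE" is a toy XOR cipher, so the sketch illustrates only how f travels inside the functional key and how the key SKE.k carried in the ciphertext unlocks it, not any security property. All concrete primitives are stand-ins.

```python
# Toy, intentionally insecure sketch of the FPE = (Setup, KG, Enc, Dec)
# transformation. A real instantiation needs a genuine FE scheme and a
# CPA-secure SKE; every primitive below is a placeholder.

import os
import pickle

def _xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: repeating-key XOR (illustration only)."""
    stream = (key * (len(data) // len(key) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

def setup():
    """Setup: sample the two SKE keys k and k' (toy FE needs no msk)."""
    return {"k": os.urandom(32), "k_prime": os.urandom(32)}

def keygen(msk, f):
    """KG(msk, f): encrypt f under both SKE keys, then issue a
    functional key for U_{c,c'}."""
    c = _xor(msk["k"], pickle.dumps(f))
    c_prime = _xor(msk["k_prime"], pickle.dumps(f))

    def U(m, m_prime, k, k_prime):
        # U_{c,c'} from Figure 1: recover a function from c (or c').
        if k is not None:
            g = pickle.loads(_xor(k, c))            # g = SKE.Dec(k, c)
            return g(m)
        g = pickle.loads(_xor(k_prime, c_prime))    # g = SKE.Dec(k', c')
        return g(m_prime)

    return U  # stands in for FE.KG(FE.msk, U_{c,c'})

def encrypt(msk, m):
    """Enc(msk, m): FE-encrypt (m, ⊥, SKE.k, ⊥); the toy 'FE ciphertext'
    is the tuple itself."""
    return (m, None, msk["k"], None)

def decrypt(sk_f, ct):
    """Dec(sk_f, ct): run the functional key on the ciphertext."""
    return sk_f(*ct)

def double(x):
    return 2 * x

msk = setup()
sk = keygen(msk, double)
assert decrypt(sk, encrypt(msk, 21)) == 42
```

The second slot of the plaintext tuple and the second SKE key are never used by honest parties; as in the proof below, they exist only to give the hybrid experiments room to move from the left function/message pair to the right one.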
Note that if the underlying scheme FE supports functions that are computable by circuits of
size at most s, for some sufficiently large polynomial s = s(n), then our new scheme FPE supports
functions that are computable by circuits of size Ω(s). Specifically, a functional key skf for a function
f in the new scheme consists of a functional key for the function Uc,c′ in the underlying scheme. The
function Uc,c′ is computable in a straightforward manner by a circuit that contains two copies of a
circuit for computing f, and two copies of SKE's decryption circuit. The security of our construction
is captured by the following theorem:
Theorem 4.1. Assuming that the scheme SKE is CPA-secure, the following hold:
1. If the scheme FE is fully message private then the scheme FPE is fully function private.
2. If the scheme FE is selective-message message private (resp. T -selective-message message
private) then the scheme FPE is selective-message function private (resp. T -selective-message
function private).
3. If the scheme FE is selective-function message private (resp. T -selective-function message
private) then the scheme FPE is selective-function function private (resp. T -selective-function
function private).
As discussed in Section 2.2.1, Theorem 4.1 can be instantiated based on a variety of known
functional encryption schemes (e.g., [GVW12, ABG+ 13, GGH+ 13, GKP+ 13a, BCP14]) that offer
full message privacy, selective-message message privacy, and selective-function message privacy.
Proof of Theorem 4.1. For concreteness we first prove the theorem for the case where the scheme
FE is fully message private, and then explain the minor adjustments that are needed for the case
where the scheme FE is selectively message private. Let A be a valid function-privacy adversary for
the scheme FPE (recall Definition 3.1). We prove that there exist a probabilistic polynomial-time
adversary B1 attacking the CPA security of SKE, and a probabilistic polynomial-time adversary B2
attacking the message privacy of FE, such that

Adv^FP_{FPE,A}(λ) ≤ 2 · ( Adv^CPA_{SKE,B1}(λ) + Adv^MP_{FE,B2}(λ) ).
We present a sequence of five hybrid experiments, denoted H(0), . . . , H(4), and prove that every two
consecutive experiments are computationally indistinguishable from A's point of view. Each such
experiment H(i) is completely characterized by its key-generation oracle (denoted KG(i) ) and its
encryption oracle (denoted Enc(i) ). The differences between these experiments are summarized in
Table 1.
Table 1: The differences between the experiments H(0), . . . , H(4). Adjacent experiments that differ on the
generation of c or c′ are proven indistinguishable using the CPA security of SKE. Adjacent experiments that
differ on the input to FE.Enc are proven indistinguishable using the message privacy of FE.

Experiment   c encrypts (under SKE.k)   c′ encrypts (under SKE.k′)   Input to FE.Enc
H(0)         f0                         f0                           (m0, ⊥, SKE.k, ⊥)
H(1)         f0                         f1                           (m0, ⊥, SKE.k, ⊥)
H(2)         f0                         f1                           (⊥, m1, ⊥, SKE.k′)
H(3)         f1                         f1                           (⊥, m1, ⊥, SKE.k′)
H(4)         f1                         f1                           (m1, ⊥, SKE.k, ⊥)
Experiment H(0)(λ). This experiment is the experiment A^{KG0(msk,·,·),Enc0(msk,·,·)}(λ) where msk ←
Setup(1λ) (see Definition 3.2). That is, we let KG(0) = KG0 and Enc(0) = Enc0.
Experiment H(1) (λ). This experiment is obtained from the experiment H(0) (λ) by considering
a modified key-generation oracle KG(1) (msk, ·, ·) that is defined as follows: On input (f0 , f1 ) it first
computes c ← SKE.Enc(SKE.k, f0 ) and c′ ← SKE.Enc(SKE.k′ , f1 ). Then, it computes FE.SKUc,c′ ←
FE.KG(FE.msk, Uc,c′ ), where the function Uc,c′ is described in Figure 1. Finally, it outputs skf0 =
FE.SKUc,c′ . The encryption oracle is not modified (i.e., Enc(1) = Enc(0) = Enc0 ).
Claim 4.2. There exists a probabilistic polynomial-time adversary B1 such that

Pr[ A^{KG(0)(msk,·,·),Enc(0)(msk,·,·)}(λ) = 1 ] − Pr[ A^{KG(1)(msk,·,·),Enc(1)(msk,·,·)}(λ) = 1 ] ≤ Adv^CPA_{SKE,B1}(λ).

The above claim states that the CPA security of the scheme SKE implies that the experiments
H(0) and H(1) are computationally indistinguishable. Specifically, the difference between these
experiments is that in H(0) the key-generation oracle computes c′ ← SKE.Enc(SKE.k′, f0), whereas
in H(1) it computes c′ ← SKE.Enc(SKE.k′, f1). The claim then follows from the fact that the key
SKE.k′ is not used for any other purpose in these experiments. We refer the reader to
Appendix A for the complete proof of Claim 4.2.
Experiment H(2) (λ). This experiment is obtained from the experiment H(1) (λ) by considering a
modified encryption oracle Enc(2) (msk, ·, ·) that is defined as follows: On input (m0 , m1 ) it outputs
c ← FE.Enc(FE.msk, (⊥, m1 , ⊥, SKE.k′ ) ). The key-generation oracle is not modified (i.e., KG(2) =
KG(1) ).
Claim 4.3. There exists a probabilistic polynomial-time adversary B2 such that

Pr[ A^{KG(1)(msk,·,·),Enc(1)(msk,·,·)}(λ) = 1 ] − Pr[ A^{KG(2)(msk,·,·),Enc(2)(msk,·,·)}(λ) = 1 ] ≤ Adv^MP_{FE,B2}(λ).

The above claim states that the message privacy of the scheme FE implies that the experiments
H(1) and H(2) are computationally indistinguishable. Specifically, the difference between these
experiments is that in H(1) the encryption oracle computes c ← FE.Enc(FE.msk, (m0, ⊥, SKE.k, ⊥)),
whereas in H(2) it computes c ← FE.Enc(FE.msk, (⊥, m1, ⊥, SKE.k′)). Note that for each functional
key FE.SKUc,c′ that is produced by the key-generation algorithm in these experiments it holds that
c ← SKE.Enc(SKE.k, f0) and c′ ← SKE.Enc(SKE.k′, f1). Therefore, by the fact that A is a valid
function-privacy adversary we know that f0(m0) = f1(m1), and therefore

Uc,c′(m0, ⊥, SKE.k, ⊥) = f0(m0) = f1(m1) = Uc,c′(⊥, m1, ⊥, SKE.k′).

We refer the reader to Appendix A for the complete proof of Claim 4.3.
Experiment H(3) (λ). This experiment is obtained from the experiment H(2) (λ) by considering
a modified key-generation oracle KG(3) (msk, ·, ·) that is defined as follows: On input (f0 , f1 ) it first
computes c ← SKE.Enc(SKE.k, f1 ) and c′ ← SKE.Enc(SKE.k′ , f1 ). Then, it computes FE.SKUc,c′ ←
FE.KG(FE.msk, Uc,c′ ), where the function Uc,c′ is described in Figure 1. Finally, it outputs skf1 =
FE.SKUc,c′ . The encryption oracle is not modified (i.e., Enc(3) = Enc(2) ).
Claim 4.4. There exists a probabilistic polynomial-time adversary B1 such that

Pr[ A^{KG(2)(msk,·,·),Enc(2)(msk,·,·)}(λ) = 1 ] − Pr[ A^{KG(3)(msk,·,·),Enc(3)(msk,·,·)}(λ) = 1 ] ≤ Adv^CPA_{SKE,B1}(λ).

The proof of the above claim is essentially identical to the proof of Claim 4.2. Specifically, the
difference between the experiments H(2) and H(3) is that in H(2) the key-generation oracle computes
c ← SKE.Enc(SKE.k, f0), whereas in H(3) it computes c ← SKE.Enc(SKE.k, f1). The claim then
follows from the fact that the key SKE.k is not used for any other purpose in these experiments.
We refer the reader to Appendix A for the complete proof of Claim 4.4.
Experiment H(4) (λ). This experiment is obtained from the experiment H(3) (λ) by considering a
modified encryption oracle Enc(4) (msk, ·, ·) that is defined as follows: On input (m0 , m1 ) it outputs
c ← FE.Enc(FE.msk, (m1 , ⊥, SKE.k, ⊥) ). The key-generation oracle is not modified (i.e., KG(4) =
KG(3)). Note that Enc(4) = Enc1 and KG(4) = KG1, and therefore this experiment is in fact the
experiment A^{KG1(msk,·,·),Enc1(msk,·,·)}(λ).
Claim 4.5. There exists a probabilistic polynomial-time adversary B2 such that
Pr[ A^{KG(3)(msk,·,·),Enc(3)(msk,·,·)}(λ) = 1 ] − Pr[ A^{KG(4)(msk,·,·),Enc(4)(msk,·,·)}(λ) = 1 ] ≤ Adv^MP_{FE,B2}(λ).
The proof of the above claim is essentially identical to the proof of Claim 4.3 (it is in fact somewhat
simpler). Specifically, the difference between the experiments H(3) and H(4) is that in H(3) the
encryption oracle computes c ← FE.Enc(FE.msk, (⊥, m1, ⊥, SKE.k′)), whereas in H(4) it computes
c ← FE.Enc(FE.msk, (m1, ⊥, SKE.k, ⊥)). Note that for each functional key FE.SKUc,c′ that is pro-
duced by the key-generation algorithm in these experiments it holds that c ← SKE.Enc(SKE.k, f1)
and c′ ← SKE.Enc(SKE.k′, f1), and therefore

Uc,c′(⊥, m1, ⊥, SKE.k′) = f1(m1) = Uc,c′(m1, ⊥, SKE.k, ⊥).

We refer the reader to Appendix A for the complete proof of Claim 4.5.
We conclude the proof by observing that Claims 4.2–4.5 imply that there exist polynomial-time
adversaries B1 and B2 such that
Adv^FP_{FPE,A}(λ) := Pr[ A^{KG0(msk,·,·),Enc0(msk,·,·)}(λ) = 1 ] − Pr[ A^{KG1(msk,·,·),Enc1(msk,·,·)}(λ) = 1 ]
                 ≤ 2 · ( Adv^CPA_{SKE,B1}(λ) + Adv^MP_{FE,B2}(λ) ).
This settles the proof of the theorem for the case where the scheme FE is fully message private,
showing that the scheme FPE is fully function private. Finally, in the case where the scheme FE is
selectively message private, an almost identical proof shows that the scheme FPE offers the same
flavor of selective function privacy. The only difference is that the modifications to the left-or-right
encryption and key-generation oracles in our hybrids should be applied at the beginning of the
experiments (depending on the flavor of the selective notion).
Acknowledgments
We thank Shweta Agrawal and Vinod Vaikuntanathan for insightful discussions.
References
[ABG+ 13] P. Ananth, D. Boneh, S. Garg, A. Sahai, and M. Zhandry. Differing-inputs obfuscation
and applications. Cryptology ePrint Archive, Report 2013/689, 2013.
[ABS+ 14] P. Ananth, Z. Brakerski, G. Segev, and V. Vaikuntanathan. The trojan method in
functional encryption: From selective to adaptive security, generically. Cryptology
ePrint Archive, Report 2014/917, 2014.
[BF03] D. Boneh and M. K. Franklin. Identity-based encryption from the Weil pairing. SIAM
Journal on Computing, 32(3):586–615, 2003. Preliminary version in Advances in Cryp-
tology – CRYPTO ’01, pages 213–229, 2001.
[BSW11] D. Boneh, A. Sahai, and B. Waters. Functional encryption: Definitions and challenges.
In Proceedings of the 8th Theory of Cryptography Conference, pages 253–273, 2011.
[Coc01] C. Cocks. An identity based encryption scheme based on quadratic residues. In Pro-
ceedings of the 8th IMA International Conference on Cryptography and Coding, pages
360–363, 2001.
[DIJ+ 13] A. De Caro, V. Iovino, A. Jain, A. O’Neill, O. Paneth, and G. Persiano. On the
achievability of simulation-based security for functional encryption. In Advances in
Cryptology – CRYPTO ’13, pages 519–535, 2013.
[GGG+ 14] S. Goldwasser, S. D. Gordon, V. Goyal, A. Jain, J. Katz, F.-H. Liu, A. Sahai, E. Shi,
and H.-S. Zhou. Multi-input functional encryption. In Advances in Cryptology – EU-
ROCRYPT ’14, pages 578–602, 2014. Merge of [GGJ+ 13] and [GKL+ 13].
[GGH+ 13] S. Garg, C. Gentry, S. Halevi, M. Raykova, A. Sahai, and B. Waters. Candidate
indistinguishability obfuscation and functional encryption for all circuits. In Proceedings
of the 54th Annual IEEE Symposium on Foundations of Computer Science, pages 40–
49, 2013.
[GGH+ 14] S. Garg, C. Gentry, S. Halevi, and M. Zhandry. Fully secure functional encryption
without obfuscation. Cryptology ePrint Archive, Report 2014/666, 2014.
[GGJ+ 13] S. Goldwasser, V. Goyal, A. Jain, and A. Sahai. Multi-input functional encryption.
Cryptology ePrint Archive, Report 2013/727, 2013.
[GJK+ 13] V. Goyal, A. Jain, V. Koppula, and A. Sahai. Functional encryption for randomized
functionalities. Cryptology ePrint Archive, Report 2013/729, 2013.
[GKL+ 13] S. D. Gordon, J. Katz, F.-H. Liu, E. Shi, and H.-S. Zhou. Multi-input functional
encryption. Cryptology ePrint Archive, Report 2013/774, 2013.
[GKP+ 13a] S. Goldwasser, Y. Kalai, R. A. Popa, V. Vaikuntanathan, and N. Zeldovich. Reusable
garbled circuits and succinct functional encryption. In Proceedings of the 45th Annual
ACM Symposium on Theory of Computing, pages 555–564, 2013.
[GKP+ 13b] S. Goldwasser, Y. T. Kalai, R. A. Popa, V. Vaikuntanathan, and N. Zeldovich. How
to run Turing machines on encrypted data. In Advances in Cryptology – CRYPTO '13,
pages 536–553, 2013.
[GVW12] S. Gorbunov, V. Vaikuntanathan, and H. Wee. Functional encryption with bounded
collusions via multi-party computation. In Advances in Cryptology – CRYPTO ’12,
pages 162–179, 2012.
[KSY15] I. Komargodski, G. Segev, and E. Yogev. Functional encryption for randomized func-
tionalities in the private-key setting from minimal assumptions. To appear in Proceed-
ings of the 12th Theory of Cryptography Conference, 2015.
[NY90] M. Naor and M. Yung. Public-key cryptosystems provably secure against chosen ci-
phertext attacks. In Proceedings of the 22nd Annual ACM Symposium on Theory of
Computing, pages 427–437, 1990.
[O’N10] A. O’Neill. Definitional issues in functional encryption. Cryptology ePrint Archive,
Report 2010/556, 2010.
[Sha84] A. Shamir. Identity-based cryptosystems and signature schemes. In Advances in Cryp-
tology – CRYPTO ’84, pages 47–53, 1984.
[SSW09] E. Shen, E. Shi, and B. Waters. Predicate privacy in encryption systems. In Proceedings
of the 6th Theory of Cryptography Conference, pages 457–473, 2009.
[SW08] A. Sahai and B. Waters. Slides on functional encryption. Available at http://www.
cs.utexas.edu/~bwaters/presentations/files/functional.ppt, 2008.
[SW14] A. Sahai and B. Waters. How to use indistinguishability obfuscation: deniable en-
cryption, and more. In Proceedings of the 46th Annual ACM Symposium on Theory of
Computing, pages 475–484, 2014.
[Wat14] B. Waters. A punctured programming approach to adaptively secure functional en-
cryption. Cryptology ePrint Archive, Report 2014/588, 2014.
A Proofs of Claims 4.2–4.5
In this section we provide the complete proofs of Claims 4.2–4.5 from Section 4.
Proof of Claim 4.2. Consider the following adversary B1 that is given as input the security pa-
rameter 1λ , and has access to the oracle SKE.Encb (SKE.k′ , ·, ·), where b ∈ {0, 1} and SKE.k′ ←
SKE.KG(1λ ). The adversary B1 first samples FE.msk ← FE.Setup(1λ ) and SKE.k ← SKE.KG(1λ ),
and then invokes the adversary A on the security parameter 1λ . Next, B1 simulates to A the
left-or-right key-generation and encryption oracles of the scheme FPE as follows:
• Simulating the encryption oracle. On input (m0 , m1 ), the adversary B1 outputs c ←
FE.Enc(FE.msk, (m0 , ⊥, SKE.k, ⊥)). Note that B1 knows the keys FE.msk and SKE.k.
• Simulating the key-generation oracle. On input (f0, f1), the adversary B1 first computes
c ← SKE.Enc(SKE.k, f0) (note that B1 knows the key SKE.k). Then, it queries its own encryption
oracle SKE.Encb(SKE.k′, ·, ·) with (f0, f1) for obtaining a ciphertext c′. Next, B1 computes
and outputs FE.SKUc,c′ ← FE.KG(FE.msk, Uc,c′ ) (note that B1 knows the key FE.msk).
Finally, B1 outputs the output of A. We observe that if b = 0 then B1 provides a perfect simulation
of the oracles Enc(0) and KG(0) , and if b = 1 then B1 provides a perfect simulation of the oracles
Enc(1) and KG(1) . Therefore,
Adv^CPA_{SKE,B1}(λ) := Pr[ B1^{SKE.Enc0(SKE.k′,·,·)}(λ) = 1 ] − Pr[ B1^{SKE.Enc1(SKE.k′,·,·)}(λ) = 1 ]
                   = Pr[ A^{KG(0)(msk,·,·),Enc(0)(msk,·,·)}(λ) = 1 ] − Pr[ A^{KG(1)(msk,·,·),Enc(1)(msk,·,·)}(λ) = 1 ].
Proof of Claim 4.3. Consider the following adversary B2 that is given as input the security
parameter 1λ , and has access to the oracles FE.Encb (FE.msk, ·, ·) and FE.KG(FE.msk, ·), where
b ∈ {0, 1} and FE.msk ← FE.Setup(1λ ). The adversary B2 first samples SKE.k ← SKE.KG(1λ )
and SKE.k′ ← SKE.KG(1λ ), and then invokes the adversary A on the security parameter 1λ . Next,
B2 simulates to A the left-or-right key-generation and encryption oracles of the scheme FPE as
follows:
• Simulating the encryption oracle. On input (m0 , m1 ), the adversary B2 queries its own en-
cryption oracle FE.Encb (FE.msk, ·, ·) with ((m0 , ⊥, SKE.k, ⊥), (⊥, m1 , ⊥, SKE.k′ )) for obtaining
a ciphertext c. It then outputs c.
• Simulating the key-generation oracle. On input (f0 , f1 ), the adversary B2 first computes
c ← SKE.Enc(SKE.k, f0 ) and c′ ← SKE.Enc(SKE.k′ , f1 ). Next, it queries its own key-generation
oracle FE.KG(FE.msk, ·) with Uc,c′ for obtaining a functional key FE.SKUc,c′ . It then outputs
FE.SKUc,c′ .
Finally, B2 outputs the output of A. We observe that if b = 0 then B2 provides a perfect simulation
of the oracles Enc(1) and KG(1) , and if b = 1 then B2 provides a perfect simulation of the oracles
Enc(2) and KG(2) . Therefore,
Adv^MP_{FE,B2}(λ) := Pr[ B2^{FE.KG(FE.msk,·),FE.Enc0(FE.msk,·,·)}(λ) = 1 ] − Pr[ B2^{FE.KG(FE.msk,·),FE.Enc1(FE.msk,·,·)}(λ) = 1 ]
                 = Pr[ A^{KG(1)(msk,·,·),Enc(1)(msk,·,·)}(λ) = 1 ] − Pr[ A^{KG(2)(msk,·,·),Enc(2)(msk,·,·)}(λ) = 1 ].
Therefore, it only remains to prove that B2 is a valid message-privacy adversary (see Definition
2.3). That is, it remains to prove that for all functions Uc,c′ with which it queries its key-generation
oracle, and for all pairs of messages ((m0 , ⊥, SKE.k, ⊥), (⊥, m1 , ⊥, SKE.k′ )) with which it queries
its left-or-right encryption oracle it holds that Uc,c′ (m0 , ⊥, SKE.k, ⊥) = Uc,c′ (⊥, m1 , ⊥, SKE.k′ ). By
the definition of B2 for each such function Uc,c′ it holds that c ← SKE.Enc(SKE.k, f0 ) and c′ ←
SKE.Enc(SKE.k′ , f1 ), where (f0 , f1 ) is a query made by A to the left-or-right key-generation oracle
of the scheme FPE. The fact that A is a valid function-privacy adversary (see Definition 3.1)
guarantees that f0 (m0 ) = f1 (m1 ), and therefore by the definition of the function Uc,c′ and the
perfect correctness of SKE, it holds that
Uc,c′ (m0 , ⊥, SKE.k, ⊥) = f0 (m0 ) = f1 (m1 ) = Uc,c′ (⊥, m1 , ⊥, SKE.k′ ).
Proof of Claim 4.4. Consider the following adversary B1 that is given as input the security pa-
rameter 1λ , and has access to the oracle SKE.Encb (SKE.k, ·, ·), where b ∈ {0, 1} and SKE.k ←
SKE.KG(1λ ). The adversary B1 first samples FE.msk ← FE.Setup(1λ ) and SKE.k′ ← SKE.KG(1λ ),
and then invokes the adversary A on the security parameter 1λ . Next, B1 simulates to A the
left-or-right key-generation and encryption oracles of the scheme FPE as follows:
• Simulating the encryption oracle. On input (m0 , m1 ), the adversary B1 outputs c ←
FE.Enc(FE.msk, (⊥, m1 , ⊥, SKE.k′ )). Note that B1 knows the keys FE.msk and SKE.k′ .
• Simulating the key-generation oracle. On input (f0 , f1 ), the adversary B1 first queries its
own encryption oracle SKE.Encb (SKE.k, ·, ·) with (f0 , f1 ) for obtaining a ciphertext c. Then, it
computes c′ ← SKE.Enc(SKE.k′ , f1 ) (note that B1 knows the key SKE.k′ ). Next, B1 computes
and outputs FE.SKUc,c′ ← FE.KG(FE.msk, Uc,c′ ) (note that B1 knows the key FE.msk).
Finally, B1 outputs the output of A. We observe that if b = 0 then B1 provides a perfect simulation
of the oracles Enc(2) and KG(2) , and if b = 1 then B1 provides a perfect simulation of the oracles
Enc(3) and KG(3) . Therefore,
Adv^CPA_{SKE,B1}(λ) := Pr[ B1^{SKE.Enc0(SKE.k,·,·)}(λ) = 1 ] − Pr[ B1^{SKE.Enc1(SKE.k,·,·)}(λ) = 1 ]
                   = Pr[ A^{KG(2)(msk,·,·),Enc(2)(msk,·,·)}(λ) = 1 ] − Pr[ A^{KG(3)(msk,·,·),Enc(3)(msk,·,·)}(λ) = 1 ].
Proof of Claim 4.5. Consider the following adversary B2 that is given as input the security
parameter 1λ , and has access to the oracles FE.Encb (FE.msk, ·, ·) and FE.KG(FE.msk, ·), where
b ∈ {0, 1} and FE.msk ← FE.Setup(1λ ). The adversary B2 first samples SKE.k ← SKE.KG(1λ )
and SKE.k′ ← SKE.KG(1λ ), and then invokes the adversary A on the security parameter 1λ . Next,
B2 simulates to A the left-or-right key-generation and encryption oracles of the scheme FPE as
follows:
• Simulating the encryption oracle. On input (m0 , m1 ), the adversary B2 queries its own en-
cryption oracle FE.Encb (FE.msk, ·, ·) with ((⊥, m1 , ⊥, SKE.k′ ), (m1 , ⊥, SKE.k, ⊥)) for obtaining
a ciphertext c. It then outputs c.
• Simulating the key-generation oracle. On input (f0 , f1 ), the adversary B2 first computes
c ← SKE.Enc(SKE.k, f1 ) and c′ ← SKE.Enc(SKE.k′ , f1 ). Next, it queries its own key-generation
oracle FE.KG(FE.msk, ·) with Uc,c′ for obtaining a functional key FE.SKUc,c′ . It then outputs
FE.SKUc,c′ .
Finally, B2 outputs the output of A. We observe that if b = 0 then B2 provides a perfect simulation
of the oracles Enc(3) and KG(3) , and if b = 1 then B2 provides a perfect simulation of the oracles
Enc(4) and KG(4) . Therefore,
Adv^MP_{FE,B2}(λ) := Pr[ B2^{FE.KG(FE.msk,·),FE.Enc0(FE.msk,·,·)}(λ) = 1 ] − Pr[ B2^{FE.KG(FE.msk,·),FE.Enc1(FE.msk,·,·)}(λ) = 1 ]
                 = Pr[ A^{KG(3)(msk,·,·),Enc(3)(msk,·,·)}(λ) = 1 ] − Pr[ A^{KG(4)(msk,·,·),Enc(4)(msk,·,·)}(λ) = 1 ].
Therefore, it only remains to prove that B2 is a valid message-privacy adversary (see Definition
2.3). That is, it remains to prove that for all functions Uc,c′ with which it queries its key-generation
oracle, and for all pairs of messages ((⊥, m1 , ⊥, SKE.k′ ), (m1 , ⊥, SKE.k, ⊥)) with which it queries
its left-or-right encryption oracle it holds that Uc,c′ (⊥, m1 , ⊥, SKE.k′ ) = Uc,c′ (m1 , ⊥, SKE.k, ⊥). By
the definition of B2 for each such function Uc,c′ it holds that c ← SKE.Enc(SKE.k, f1 ) and c′ ←
SKE.Enc(SKE.k′ , f1 ), where (f0 , f1 ) is a query made by A to the left-or-right key-generation oracle
of the scheme FPE. Therefore, by the definition of the function Uc,c′ it holds that

Uc,c′(⊥, m1, ⊥, SKE.k′) = f1(m1) = Uc,c′(m1, ⊥, SKE.k, ⊥).