
arXiv:quant-ph/0511006v3  13 Feb 2006
Additivity and multiplicativity properties of some Gaussian channels for Gaussian inputs
Tohya Hiroshima

Quantum Computation and Information Project, ERATO-SORST, Japan Science and Technology Agency,
Daini Hongo White Building 201, Hongo 5-28-3, Bunkyo-ku, Tokyo 113-0033, Japan
(Dated: February 3, 2007)
We prove multiplicativity of the maximal output p norm of classical noise channels and thermal noise channels
of arbitrary modes for all p > 1 under the assumption that the input signal states are Gaussian states. As a direct
consequence, we also show the additivity of the minimal output entropy and that of the energy-constrained
Holevo capacity for those Gaussian channels under Gaussian inputs. A newly discovered majorization relation
on symplectic eigenvalues, which is also of independent interest, plays a central role in the proof.
PACS numbers: 03.67.-a, 42.50.-p, 03.65.Ud
I. INTRODUCTION
One of the goals of quantum information theory is to clarify
the ultimate capability of information processing harnessed
by using quantum mechanics [1, 2]. The celebrated Holevo-
Schumacher-Westmoreland theorem [3, 4] gives us a formal
basis to determine the ultimate transmission rate of classical
information encoded in quantum states transmitted through a
quantum channel. Yet, an important question remains unanswered concerning the classical capacity of quantum channels: the additivity question. Do entangled inputs over several invocations of quantum channels improve the classical capacity? Despite many efforts devoted to the additivity problems of quantum channels,
the additivity properties have been proven for a few examples,
such as entanglement breaking channels [5], unital qubit chan-
nels [6], depolarizing channels [7], and contravariant channels
[8]. Surprisingly, the additivity problems of quantum channels
have been shown to be equivalent to the seemingly unrelated
additivity problems of quantum entanglement, i.e., the addi-
tivity and the strong superadditivity of entanglement of for-
mation [9, 10, 11, 12]. None of these problems has been completely solved, and they are now major concerns in quantum information and quantum entanglement theory.
As for continuous-variable quantum systems, in spite of in-
tensive research [13, 14, 15], only lossy channels have been
proven to be additive [16]. The additivity problems may be
much more intractable for continuous-variable quantum sys-
tems. The natural question is, therefore, what can we say
about the additivity properties of Gaussian quantum channels
if we restrict the input signal states to be Gaussian states? This
question has its own significance. One rationale is that Gaussian channels correspond to the so-called Gaussian operations that can be implemented by current experimental techniques, such as beamsplitters, phase shifters, squeezers, and homodyne measurements. Another is mathematical simplicity; Gaussian operations on Gaussian states are completely characterized by finite-dimensional matrices and vectors, although the underlying Hilbert space is infinite dimensional.

Electronic address: [email protected]


Due to their mathematical simplicity, the additivity problems
of Gaussian channels under Gaussian inputs provide a potential firm step towards answering the additivity questions.
Serafini et al. [17] formulated the multiplicativity problems
of the purity at the output of Gaussian channels measured by
the Schatten p norm under the assumption that the input sig-
nal states were Gaussian states. In this paper, we extend their
formalism to the additivity problems of minimal output en-
tropy and energy-constrained Holevo capacity and prove the
additivity properties of two classes of Gaussian channels: the classical noise channels and the thermal noise channels of arbitrary modes.
The paper is organized as follows. In Sec. II we introduce
the notation and present basic facts about Gaussian states and
the symplectic transformations used in this paper. In Sec. III
we define Gaussian channels and introduce three figures of merit used to quantify them: the maximal output p norm, the minimal output entropy, and the Holevo capacity. In Sec. IV we formulate the additivity and multiplicativ-
ity problems of Gaussian channels for Gaussian inputs. In
Sec. V we prove a trace formula for symplectic eigenvalues
and a majorization relation on symplectic eigenvalues that is
an immediate consequence of the trace formula. By virtue
of this majorization relation on symplectic eigenvalues, we
prove the additivity and multiplicativity properties of classical
noise channels and thermal noise channels of arbitrary modes
in Sec. VI. Section VII is devoted to concluding remarks.
II. GAUSSIAN STATES
In this section, we introduce the notation and summarize
the basic facts about Gaussian states and symplectic transfor-
mations [18]. We consider an n-mode quantum system, such as a radiation field. Each mode corresponds to a quantum me-
chanical harmonic oscillator with two canonical degrees of
freedom and the quadratures of each mode correspond to the
position and momentum of the harmonic oscillator. Thus an
n-mode state has 2n canonical degrees of freedom. Let Q_k and P_k denote the position and momentum operators associated with the kth mode (k = 1, 2, \ldots, n). These operators, or canonical variables, are written in terms of the creation and annihilation operators of the mode;

Q_k = \sqrt{\frac{1}{2\omega_k}} \left( a_k + a_k^{\dagger} \right)   (1)

and

P_k = -i \sqrt{\frac{\omega_k}{2}} \left( a_k - a_k^{\dagger} \right),   (2)

where \omega_k denotes the energy of the kth mode (\hbar = 1). Since [a_j, a_k] = [a_j^{\dagger}, a_k^{\dagger}] = 0 and [a_j, a_k^{\dagger}] = \delta_{jk}, we have [Q_j, Q_k] = [P_j, P_k] = 0 and [Q_j, P_k] = i\delta_{jk}. Defining

R = (R_1, R_2, \ldots, R_{2n})^T = (\omega_1^{1/2} Q_1, \omega_1^{-1/2} P_1, \ldots, \omega_n^{1/2} Q_n, \omega_n^{-1/2} P_n)^T,   (3)
these canonical commutation relations (CCRs) can be written
as [R_j, R_k] = i(J_n)_{jk}. Here, J_n = \bigoplus_{j=1}^{n} J_1 with

J_1 = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.   (4)
In the following, the characteristic function defined as \phi(\xi) = \mathrm{Tr}[\rho W(\xi)] plays a key role. Here, W(\xi) = \exp(i \xi^T J_n R) is called the Weyl operator and \rho denotes the density operator. The density operator in turn can be written in terms of its characteristic function and Weyl operators as follows.

\rho = \frac{1}{(2\pi)^n} \int d^{2n}\xi\, \phi(-\xi)\, W(\xi).   (5)

A Gaussian state is defined as a state whose characteristic function is a Gaussian function:

\phi(\xi) = \exp\left[ -\tfrac{1}{4} \xi^T \Gamma \xi + i D^T \xi \right].   (6)

Here, \Gamma > 0 is a real symmetric matrix and D \in \mathbb{R}^{2n}. The first moment is also called the displacement or mean and is given by m_j = \mathrm{Tr}(\rho R_j), and the second moment is given by

\sigma_{jk} = 2\, \mathrm{Tr}[\rho (R_j - m_j)(R_k - m_k)] - i (J_n)_{jk},   (7)

which is called the covariance of the canonical variables. The 2n \times 2n real symmetric matrix \sigma = (\sigma_{jk}) is called the covariance matrix. \Gamma and D in Eq. (6) are given by \Gamma = J_n^T \sigma J_n and D = J_n m.
Note that due to our choice of canonical variables, R_{2j-1} = \omega_j^{1/2} Q_j and R_{2j} = \omega_j^{-1/2} P_j (j = 1, 2, \ldots, n), the trace of the principal submatrix of the jth mode of \sigma,

\sigma_{[j]} = \begin{pmatrix} \sigma_{2j-1,2j-1} & \sigma_{2j-1,2j} \\ \sigma_{2j,2j-1} & \sigma_{2j,2j} \end{pmatrix},   (8)

gives the energy of the jth mode if m_{2j-1} = m_{2j} = 0;

\frac{1}{4} \omega_j \left( \sigma_{2j-1,2j-1} + \sigma_{2j,2j} \right) = \omega_j \left( \langle a_j^{\dagger} a_j \rangle + \frac{1}{2} \right).   (9)
A density operator is a positive semidefinite operator (\rho \geq 0) with \mathrm{Tr}\rho = 1. The necessary and sufficient condition for \rho \geq 0 in a Gaussian state is given in terms of the covariance matrix \sigma as follows [19].

\sigma + i J_n \geq 0.   (10)

Furthermore, the necessary and sufficient condition for a pure Gaussian state is given by [20]

\det \sigma = 1.   (11)
A linear transformation on canonical variables is written as R \to R' = S R. Since the new variables R' also must conserve the CCR [R'_j, R'_k] = i(J_n)_{jk}, the relation S J_n S^T = J_n must hold. Such a 2n \times 2n real matrix satisfying S J_n S^T = J_n is called a symplectic transformation, S \in Sp(2n, \mathbb{R}) = \{ S \mid S J_n S^T = J_n \}, which forms a group so that S^{-1} and S_1 S_2 are symplectic if S, S_1, S_2 \in Sp(2n, \mathbb{R}). Furthermore, S^T is also symplectic and \det S = 1 [21] if S \in Sp(2n, \mathbb{R}). In this paper, we repeatedly use the following Williamson theorem [22]. For a real symmetric positive definite 2n \times 2n matrix A = A^T > 0, there exists a symplectic transformation S \in Sp(2n, \mathbb{R}) such that

S A S^T = \mathrm{diag}(\nu_1, \nu_1, \nu_2, \nu_2, \ldots, \nu_n, \nu_n)   (12)

with \nu_j (> 0) called the symplectic eigenvalues of A (j = 1, 2, \ldots, n). Equation (12) is called the Williamson standard form of A. The symplectic eigenvalues can be computed via the eigenvalues of J_n A, which are \pm i \nu_j (j = 1, 2, \ldots, n).
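The following short Python sketch (an illustrative addition, not part of the original paper) computes symplectic eigenvalues numerically in exactly this way, from the eigenvalues of J_n A; the function names are arbitrary.

```python
import numpy as np

def J(n):
    """Symplectic form J_n: direct sum of n copies of [[0, 1], [-1, 0]], cf. Eq. (4)."""
    j1 = np.array([[0.0, 1.0], [-1.0, 0.0]])
    return np.kron(np.eye(n), j1)

def symplectic_eigenvalues(A):
    """Symplectic eigenvalues of a real symmetric positive definite 2n x 2n matrix A.

    They are obtained as the absolute values of the purely imaginary eigenvalues
    of J_n A, which come in pairs +/- i nu_j.
    """
    n = A.shape[0] // 2
    eigs = np.linalg.eigvals(J(n) @ A)
    nu = np.sort(np.abs(eigs.imag))      # each nu_j appears twice (+/- pair)
    return nu[::2]                        # keep one of each pair, ascending order

# Example: a thermal-state covariance matrix diag(2<n>+1, 2<n>+1) has nu = 2<n>+1.
sigma_th = np.diag([3.0, 3.0])            # <n> = 1
print(symplectic_eigenvalues(sigma_th))   # -> [3.]
```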
Any symplectic transformation S \in Sp(2n, \mathbb{R}) can be decomposed into

S = T^{(1)} Z T^{(2)}   (13)

with T^{(1)}, T^{(2)} \in Sp(2n, \mathbb{R}) \cap O(2n) = K(n) and

Z = \mathrm{diag}(z_1, z_1^{-1}, \ldots, z_n, z_n^{-1}),   (14)

where z_j \geq 1 (j = 1, 2, \ldots, n) [23]. O(2n) denotes the orthogonal group whose elements are 2n \times 2n real orthogonal matrices. Equation (13) is called the Euler decomposition of symplectic transformations. K(n) is a maximal compact subgroup of Sp(2n, \mathbb{R}) and is isomorphic to U(n), the unitary group whose elements are n \times n unitary matrices [24]. The isomorphism is established via the following correspondence:

T_{2j-1,2k-1} = T_{2j,2k} = \mathrm{Re}\, u_{j,k} = u^R_{j,k}   (15)

and

T_{2j-1,2k} = -T_{2j,2k-1} = \mathrm{Im}\, u_{j,k} = u^I_{j,k},   (16)

where T \in K(n) and u_{j,k} are the (j,k) components of n \times n unitary matrices U. Using Eqs. (15) and (16), the isomorphism K(n) \cong U(n) is easily verified by direct calculation.
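The correspondence of Eqs. (15) and (16) can also be checked numerically. The sketch below is an addition for illustration (the relative sign in Eq. (16) is reconstructed here); it embeds a random unitary U into a 2n x 2n real matrix and verifies that the result is both orthogonal and symplectic.

```python
import numpy as np

def K_embed(U):
    """Embed an n x n unitary U into a 2n x 2n real matrix T in the maximal compact
    subgroup K(n), following Eqs. (15)-(16): in 1-based indices,
    T[2j-1,2k-1] = T[2j,2k] = Re u_{jk} and T[2j-1,2k] = -T[2j,2k-1] = Im u_{jk}."""
    n = U.shape[0]
    T = np.zeros((2 * n, 2 * n))
    T[0::2, 0::2] = U.real
    T[1::2, 1::2] = U.real
    T[0::2, 1::2] = U.imag
    T[1::2, 0::2] = -U.imag
    return T

rng = np.random.default_rng(0)
n = 3
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
U, _ = np.linalg.qr(M)                       # random unitary matrix

T = K_embed(U)
Jn = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))
print(np.allclose(T @ T.T, np.eye(2 * n)))   # orthogonal
print(np.allclose(T @ Jn @ T.T, Jn))         # symplectic
```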
Since the covariance matrix of an n-mode state is a 2n \times 2n real symmetric positive-definite matrix, it can be cast into the Williamson standard form. In terms of symplectic eigenvalues, condition (10) is rephrased as \nu_j \geq 1 (j = 1, 2, \ldots, n) and condition (11) is written as \nu_j = 1 for all j.
A canonical linear transformation corresponds to a unitary transformation in the Hilbert space. Such a unitary transformation is defined by U_S W(\xi) U_S^{\dagger} = W(S^{-1}\xi), and the density operator is transformed as \rho \to U_S \rho U_S^{\dagger} correspondingly. It is easy to see that U_S^{\dagger} = U_{S^{-1}}. The characteristic function of the new state is given by \mathrm{Tr}[U_S \rho U_S^{\dagger} W(\xi)] = \mathrm{Tr}[\rho W(S\xi)] = \phi(S\xi). Accordingly, the covariance matrix and the displacement are transformed as \sigma \to S^{-1} \sigma (S^{-1})^T and m \to S^{-1} m. Note that the symplectic eigenvalues are invariant under such symplectic transformations on the covariance matrix.
Coherent states, squeezed states, and thermal states are typical Gaussian states, while the number states (of a single mode) given by |k\rangle\langle k| with

|k\rangle = \frac{1}{\sqrt{k!}} (a^{\dagger})^k |0\rangle, \quad k = 1, 2, \ldots,   (17)

are not. However, the vacuum state |0\rangle\langle 0|, which is a special case of the number states, is a Gaussian state with the covariance matrix

\sigma_{\mathrm{vac}} = \mathrm{diag}(1, 1).   (18)

This is the minimal-energy pure state. The coherent state is the displaced vacuum state, so its covariance matrix is given by Eq. (18) but it has a finite displacement. The thermal state of a single mode,

\rho_{\mathrm{th}} = \frac{1}{1 + \langle n \rangle} \sum_{k=0}^{\infty} \left( \frac{\langle n \rangle}{1 + \langle n \rangle} \right)^k |k\rangle\langle k|,   (19)

has the covariance matrix

\sigma_{\mathrm{th}} = \mathrm{diag}(2\langle n \rangle + 1, 2\langle n \rangle + 1)   (20)

with \langle n \rangle being the mean photon number of the mode.
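As a small illustration (not in the original paper), conditions (10) and (11) are easy to test numerically for the covariance matrices just introduced:

```python
import numpy as np

J1 = np.array([[0.0, 1.0], [-1.0, 0.0]])

def is_physical(sigma):
    """Condition (10): sigma + i J_n >= 0 (as a Hermitian matrix)."""
    n = sigma.shape[0] // 2
    Jn = np.kron(np.eye(n), J1)
    return np.all(np.linalg.eigvalsh(sigma + 1j * Jn) >= -1e-12)

def is_pure(sigma):
    """Condition (11) for a physical Gaussian state: det sigma = 1."""
    return np.isclose(np.linalg.det(sigma), 1.0)

sigma_vac = np.diag([1.0, 1.0])   # vacuum, Eq. (18)
sigma_th = np.diag([3.0, 3.0])    # thermal state with <n> = 1, Eq. (20)
print(is_physical(sigma_vac), is_pure(sigma_vac))   # True True
print(is_physical(sigma_th), is_pure(sigma_th))     # True False
```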
III. GAUSSIAN CHANNELS AND THEIR
QUANTIFICATION
A Gaussian channel \Phi is a completely positive trace preserving map that maps Gaussian input states \rho to Gaussian output states \Phi(\rho) [25, 26]. The covariance matrix is transformed according to

\phi(\sigma) = X^T \sigma X + Y,   (21)

where X and Y are 2n \times 2n real matrices and Y is positive and symmetric (Y = Y^T \geq 0). The complete positivity of the channel is expressed in terms of these matrices as [27]

Y + i J_n - i X^T J_n X \geq 0.   (22)

Hereafter, we write a Gaussian channel by a capital Greek letter and the corresponding transformation on the covariance matrix by the corresponding lowercase Greek letter.
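The complete positivity condition (22) is likewise straightforward to test numerically. The sketch below (an illustrative addition) uses the single-mode matrices X = \sqrt{\eta} I_2 and Y = (2\langle n \rangle + 1)(1 - \eta) I_2 that reappear in Sec. VI for the thermal noise channel, together with a deliberately undersized Y as a counterexample.

```python
import numpy as np

J1 = np.array([[0.0, 1.0], [-1.0, 0.0]])

def is_gaussian_channel(X, Y, tol=1e-12):
    """Complete positivity test of Eq. (22): Y + i J_n - i X^T J_n X >= 0."""
    n = X.shape[0] // 2
    Jn = np.kron(np.eye(n), J1)
    M = Y + 1j * Jn - 1j * X.T @ Jn @ X
    return np.all(np.linalg.eigvalsh(M) >= -tol)

eta, nbar = 0.7, 0.5
X = np.sqrt(eta) * np.eye(2)
Y_ok = (2 * nbar + 1) * (1 - eta) * np.eye(2)
Y_bad = 0.1 * (1 - eta) * np.eye(2)     # too little noise: not a valid channel
print(is_gaussian_channel(X, Y_ok))     # True
print(is_gaussian_channel(X, Y_bad))    # False
```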
There are several figures of merit for quantifying quantum channels. Here we take three of them: the maximal output p norm [28], the minimal output entropy [29], and the Holevo capacity [30] for Gaussian state inputs.
The Gaussian maximal output p norm is defined as

\nu_p(\Phi) = \sup_{\rho \in g} \| \Phi(\rho) \|_p,   (23)

where \|\rho\|_p = (\mathrm{Tr} |\rho|^p)^{1/p} is the Schatten p norm (p \geq 1) with |A| = \sqrt{A^{\dagger} A}. In Eq. (23), g denotes the set of all Gaussian states. For a Gaussian state \rho_\sigma with covariance matrix \sigma,

\mathrm{Tr} \rho_\sigma^p = \prod_{j=1}^{n} \frac{2^p}{f_p(\nu_j)} = \frac{2^{pn}}{F_p(\nu(\sigma))},   (24)

where

f_p(x) = (x + 1)^p - (x - 1)^p.   (25)
This formula was originally derived in [31]. Note that \mathrm{Tr}\rho_\sigma^p is independent of the displacement m. We can verify that \ln f_p is increasing and concave (Appendix A), so that F_p(\nu) = \prod_{j=1}^{n} f_p(\nu_j) is increasing and Schur-concave (Appendix B). In Eq. (23), g can be replaced by g_p, the set of all pure Gaussian states [17]. In terms of F_p, we have

\left( \frac{2^n}{\nu_p(\Phi)} \right)^p = \inf_{\sigma_p} F_p\left( \nu\left( \phi(\sigma_p) \right) \right).   (26)
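Equations (24) and (25) can be checked directly against the single-mode thermal state of Eq. (19), for which the geometric series gives Tr \rho_{th}^p = (1-q)^p/(1-q^p) with q = \langle n \rangle/(1+\langle n \rangle). A minimal numerical check (added here for illustration):

```python
import numpy as np

def f_p(x, p):
    """Eq. (25): f_p(x) = (x+1)^p - (x-1)^p."""
    return (x + 1.0) ** p - (x - 1.0) ** p

def trace_rho_p(nus, p):
    """Eq. (24): Tr rho_sigma^p = prod_j 2^p / f_p(nu_j)."""
    return np.prod(2.0 ** p / f_p(np.asarray(nus), p))

# Single-mode thermal state with mean photon number N: nu = 2N+1 (Eq. (20)),
# and directly from Eq. (19), Tr rho_th^p = (1-q)^p / (1-q^p) with q = N/(N+1).
N, p = 0.8, 2.5
q = N / (N + 1)
direct = (1 - q) ** p / (1 - q ** p)
via_nu = trace_rho_p([2 * N + 1], p)
print(np.isclose(direct, via_nu))   # True
```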
The Gaussian minimal output entropy is defined as

S_{\min}(\Phi) = \inf_{\rho \in g} S(\Phi(\rho)),   (27)

where S(\rho) = -\mathrm{Tr}\rho \ln \rho is the von Neumann entropy. Following the arguments presented in [17], it can be shown that g in Eq. (27) can be replaced by g_p. Since

\lim_{p \to 1^+} \frac{d}{dp} \|\rho\|_p = -S(\rho),   (28)

S_{\min}(\Phi) can be computed through \nu_p(\Phi). Note that S_{\min}(\Phi) is also independent of the displacement m. Hereafter, we occasionally write the von Neumann entropy as S(\sigma) instead of S(\rho_\sigma) when we are dealing with Gaussian states.
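The limit in Eq. (28) can be illustrated numerically. The sketch below is an addition; it uses the standard closed form for the entropy of a thermal state, S = (\langle n \rangle+1)\ln(\langle n \rangle+1) - \langle n \rangle \ln \langle n \rangle, which is not restated in this paper, and compares a finite-difference estimate of the derivative at p \to 1^+ with -S(\rho).

```python
import numpy as np

def schatten_p_norm_thermal(N, p):
    """||rho_th||_p for a single-mode thermal state with mean photon number N,
    using Tr rho^p = 1 / ((N+1)^p - N^p), equivalent to Eq. (24) with nu = 2N+1."""
    return (1.0 / ((N + 1) ** p - N ** p)) ** (1.0 / p)

# Eq. (28): lim_{p -> 1+} d/dp ||rho||_p = -S(rho).
N = 1.3
eps = 1e-6
numerical = (schatten_p_norm_thermal(N, 1 + eps) - schatten_p_norm_thermal(N, 1)) / eps
entropy = (N + 1) * np.log(N + 1) - N * np.log(N)   # standard thermal-state entropy
print(numerical, -entropy)   # approximately equal
```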


By definition, the Holevo capacity for Gaussian state inputs, or the Gaussian Holevo capacity, is written as [32]

C_G(\Phi) = \sup_{\pi, \rho_{(\sigma,m)}} \left[ S(\Phi(\bar{\rho})) - \int \pi(d\sigma, dm)\, S\left( \Phi(\rho_{(\sigma,m)}) \right) \right],   (29)

where

\bar{\rho} = \int \pi(d\sigma, dm)\, \rho_{(\sigma,m)}   (30)

is the averaged signal state. In Eq. (29), the supremum is taken over all possible probability measures \pi and signal states \rho_{(\sigma,m)} constituting the signal ensemble. Since the states \rho_{(\sigma,m)} are infinite-dimensional states, the right-hand side of Eq. (29) becomes arbitrarily large if we do not impose some constraint on the signal states. Here, we take the energy constraint

\mathrm{Tr} \bar{\rho} H = c   (31)

with

H = \sum_{k=1}^{n} \omega_k \left( a_k^{\dagger} a_k + \frac{1}{2} \right) = \frac{1}{2} \sum_{k=1}^{n} \omega_k \left( R_{2k-1}^2 + R_{2k}^2 \right).   (32)
Here we recall that the von Neumann entropy of a Gaussian state depends only on the covariance matrix, and that the channel affects only the covariance matrix. Therefore, if we find a single state \rho_{(\sigma^{\ast},m)} that minimizes S(\Phi(\rho)), all Gaussian states with the covariance matrix \sigma^{\ast} also minimize S(\Phi(\rho)). This observation indicates that the optimal signal ensemble that attains the Gaussian Holevo capacity consists of Gaussian states with the common covariance matrix \sigma^{\ast} and a certain probability distribution of the displacement m. If we restrict the signal ensemble to that described above, it suffices to take a Gaussian probability distribution for the probability measure \pi(d\sigma, dm) = \pi(dm). This is shown as follows [33]. If \pi(dm) is a Gaussian distribution,

\pi(dm) = \frac{1}{\pi^n \sqrt{\det Y_\pi}} \exp(-m^T Y_\pi^{-1} m)\, dm   (33)

with Y_\pi > 0, the averaged input signal state is calculated as

\bar{\rho} = \int \pi(dm)\, \rho_{(\sigma,m)} = \rho_{(\sigma + Y_\pi, 0)}.   (34)

That is, \bar{\rho} is also a Gaussian state with the covariance matrix \bar{\sigma} = \sigma + Y_\pi and has vanishing displacement. Equation (34) even holds for Y_\pi \geq 0. Since the displacement of \bar{\rho} is zero,
\mathrm{Tr} \bar{\rho} H = \frac{1}{2} \sum_{k=1}^{n} \omega_k \left( \langle R_{2k-1}^2 \rangle + \langle R_{2k}^2 \rangle \right) = \frac{1}{4} \sum_{k=1}^{n} \omega_k\, \mathrm{Tr} \bar{\sigma}_{[k]},   (35)

where \bar{\sigma}_{[k]} denotes the principal submatrix of the kth mode of \bar{\sigma} defined by Eq. (8), so that the energy constraint (31) is written as

\sum_{k=1}^{n} \omega_k\, \mathrm{Tr}(\sigma + Y_\pi)_{[k]} = 4c.   (36)
Since such a signal ensemble is not always optimal, we have

C_G(\Phi, c) \geq \sup_{\sigma,\, Y_\pi (\geq 0)} \left[ S\left( \phi(\sigma + Y_\pi) \right) - S\left( \phi(\sigma) \right) \right],   (37)

where the supremum is taken under the constraint (36), so that the right-hand side of Eq. (37) is written as

\sup_{\bar{\sigma}} S(\phi(\bar{\sigma})) - \inf_{\sigma} S(\phi(\sigma)) = \sup_{\bar{\sigma}} S(\phi(\bar{\sigma})) - S_{\min}(\Phi),   (38)

where the supremum is taken under the constraint

\sum_{k=1}^{n} \omega_k\, \mathrm{Tr} \bar{\sigma}_{[k]} = 4c.   (39)
Here, we note the following extremal property of Gaussian states. For a given covariance matrix, the von Neumann entropy is maximized by the Gaussian state [33, 34]. Therefore, for the Gaussian signal states \rho_{(\sigma,m)} and the probability measure \pi(d\sigma, dm), the quantity within the brackets on the right-hand side of Eq. (29) cannot exceed the value of the right-hand side of (37). Hence, equality holds in the inequality (37);

C_G(\Phi, c) = \sup_{\bar{\sigma}} S(\phi(\bar{\sigma})) - S_{\min}(\Phi).   (40)

Again, the supremum is taken under the constraint (39). In the above arguments, we have implicitly assumed that there exists a Gaussian state with a covariance matrix satisfying Eq. (39). Otherwise, C_G(\Phi, c) should be zero because c in the energy constraint (31) would be smaller than the sum of the energies of the zero-point oscillations over the modes.
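For orientation (this example is not part of the original paper), Eq. (40) can be evaluated in closed form for a single-mode classical noise channel \phi(\sigma) = \sigma + y I_2. The sketch below assumes the standard expression for the von Neumann entropy of a one-mode Gaussian state in terms of its symplectic eigenvalue, S = g((\nu-1)/2) with g(x) = (x+1)\ln(x+1) - x\ln x, which is not restated in this paper, and it anticipates the result of Sec. VI A that S_{\min} is attained by a pure input with \nu = 1; the isotropic choice of \bar{\sigma} maximizes the output entropy under the constraint (39).

```python
import numpy as np

def g(x):
    """Entropy of a thermal state with mean photon number x (standard formula)."""
    return 0.0 if x <= 0 else (x + 1) * np.log(x + 1) - x * np.log(x)

def S_gaussian(nu):
    """von Neumann entropy of a one-mode Gaussian state with symplectic eigenvalue nu."""
    return g((nu - 1) / 2)

def C_G_classical_noise(y, omega, c):
    """Eq. (40) for a single-mode classical noise channel phi(sigma) = sigma + y*I_2.

    Constraint (39) reads omega * Tr(sigma_bar) = 4c; sigma_bar = (2c/omega) I_2
    maximizes det(sigma_bar + y I_2) and hence the output entropy, while S_min is
    attained by a pure input with nu = 1 (Sec. VI A), giving output eigenvalue 1 + y.
    """
    assert 2 * c / omega >= 1, "c must exceed the zero-point energy omega/2"
    nu_out_bar = 2 * c / omega + y      # output eigenvalue for sigma_bar
    nu_out_min = 1 + y                  # output eigenvalue for the optimal pure input
    return S_gaussian(nu_out_bar) - S_gaussian(nu_out_min)

print(C_G_classical_noise(y=0.5, omega=1.0, c=2.0))
```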
IV. ADDITIVITY AND MULTIPLICATIVITY PROBLEMS
OF GAUSSIAN CHANNELS
For the tensor product of Gaussian channels, \Phi = \bigotimes_{j=1}^{m} \Phi_j, it is evident from the definition that

\nu_p(\Phi) \geq \prod_{j=1}^{m} \nu_p(\Phi_j).   (41)
If the equality holds in the inequality (41), we say that the maximal output p norm is multiplicative for the Gaussian channels \Phi_j. To show the multiplicativity of the maximal output p norm, it suffices to show

\inf_{\sigma_p} F_p\left( \nu\left( \phi(\sigma_p) \right) \right) = \prod_{j=1}^{m} \inf_{\sigma_p} F_p\left( \nu\left( \phi_j(\sigma_p) \right) \right).   (42)

In Eq. (42), \sigma_p on the left-hand side of the equation is the covariance matrix of a pure Gaussian state on the composite Hilbert space \mathcal{H}_1 \otimes \cdots \otimes \mathcal{H}_m, while \sigma_p on the right-hand side of the equation is the covariance matrix of a pure Gaussian state on the Hilbert space \mathcal{H}_j.
By noting Eq. (28), it follows that if the maximal output
p norm is multiplicative, then the minimal output entropy is
additive.
Let c_j be the value of the energy constraint [Eq. (31)] for the Gaussian channel \Phi_j and c = \sum_{j=1}^{m} c_j. From the definition, the Gaussian Holevo capacity of the tensor product channel is greater than or equal to the supremum of the sum of the Gaussian Holevo capacities of the individual channels;

C_G(\Phi, c) \geq \sup_{c_j,\ \sum_{j=1}^{m} c_j = c}\ \sum_{j=1}^{m} C_G(\Phi_j, c_j).   (43)

Here, the supremum is taken over all possible combinations of c_j under the constraint \sum_{j=1}^{m} c_j = c. If the equality holds in the inequality (43), we say that the energy-constrained Gaussian Holevo capacity is additive for the Gaussian channels \Phi_j.
Now let \rho be a Gaussian state on the composite Hilbert space \mathcal{H}_1 \otimes \cdots \otimes \mathcal{H}_m and define \rho_j = \mathrm{Tr}_{\mathcal{H}_1 \otimes \cdots \otimes \mathcal{H}_{j-1} \otimes \mathcal{H}_{j+1} \otimes \cdots \otimes \mathcal{H}_m}\, \rho. By noting the subadditivity of the von Neumann entropy [35], S(\rho) \leq \sum_{j=1}^{m} S(\rho_j), we have

S(\phi(\sigma)) \leq \sum_{j=1}^{m} S\left( \phi_j(\sigma_j) \right),   (44)

where \sigma_j denotes the covariance matrix of the Gaussian state \rho_j. Therefore, if the minimal output entropy S_{\min}(\Phi) is additive for the channels \Phi_j, then

C_G(\Phi, c) \leq \sup_{c_j,\ \sum_{j=1}^{m} c_j = c}\ \sum_{j=1}^{m} C_G(\Phi_j, c_j).   (45)

This implies the additivity of the energy-constrained Gaussian Holevo capacity,

C_G(\Phi, c) = \sup_{c_j,\ \sum_{j=1}^{m} c_j = c}\ \sum_{j=1}^{m} C_G(\Phi_j, c_j).   (46)
Serafini et al. [17] proved that the Gaussian maximal output p norm of a tensor product of identical single-mode Gaussian channels, and that of single-mode channels described by X_i and Y_i [Eq. (21)] such that the \det X_i are identical and Y_i > 0 for all i, are multiplicative under Gaussian state inputs for p > 1. Consequently, the Gaussian minimal output entropy and the energy-constrained Gaussian Holevo capacity are additive for such tensor product channels.
V. A MAJORIZATION RELATION ON SYMPLECTIC
EIGENVALUES
Lemma 1. Let A be a 2n \times 2n real symmetric positive definite matrix (A = A^T > 0). Then

\min_{S J_n S^T = J_k} \mathrm{Tr}\, S A S^T = 2 \sum_{j=1}^{k} \nu_j(A), \quad 1 \leq k \leq n,   (47)

where \nu_1(A) \leq \nu_2(A) \leq \cdots \leq \nu_n(A) denote the symplectic eigenvalues of A arranged in increasing order. The minimum in Eq. (47) is taken over all 2k \times 2n real matrices S satisfying S J_n S^T = J_k.
Proof. First of all, we note that a 2k \times 2n matrix S satisfying S J_n S^T = J_k forms the first 2k rows of some symplectic transformation S_n \in Sp(2n, \mathbb{R}), and that A can be written in the Williamson standard form, to obtain

\min_{S J_n S^T = J_k} \mathrm{Tr}\, S A S^T = \min_{S_n J_n S_n^T = J_n} \sum_{j=1}^{2k} (S_n D_A S_n^T)_{j,j},   (48)

where we have used the fact that a product of symplectic transformations is a symplectic transformation and have defined D_A = \mathrm{diag}\left( \nu_1(A), \nu_1(A), \ldots, \nu_n(A), \nu_n(A) \right), with the symplectic eigenvalues arranged in increasing order. Here, we write S_n in the Euler decomposition form [Eq. (13)] to obtain
\sum_{j=1}^{2k} (S_n D_A S_n^T)_{j,j} = \sum_{j=1}^{k} \sum_{l,m=1}^{n} P^{(j)}_{2l-1,2m-1}\, z_l z_m\, Q_{2l-1,2m-1}
+ \sum_{j=1}^{k} \sum_{l,m=1}^{n} P^{(j)}_{2l-1,2m}\, z_l z_m^{-1}\, Q_{2l-1,2m}
+ \sum_{j=1}^{k} \sum_{l,m=1}^{n} P^{(j)}_{2l,2m-1}\, z_l^{-1} z_m\, Q_{2l,2m-1}
+ \sum_{j=1}^{k} \sum_{l,m=1}^{n} P^{(j)}_{2l,2m}\, z_l^{-1} z_m^{-1}\, Q_{2l,2m},   (49)

where z_j \geq 1 (j = 1, 2, \ldots, n),

P^{(j)}_{l,m} = T^{(1)}_{2j-1,l} T^{(1)}_{2j-1,m} + T^{(1)}_{2j,l} T^{(1)}_{2j,m},   (50)

and

Q_{l,m} = \sum_{p=1}^{n} \nu_p(A) \left( T^{(2)}_{l,2p-1} T^{(2)}_{m,2p-1} + T^{(2)}_{l,2p} T^{(2)}_{m,2p} \right)   (51)
with T^{(1)}, T^{(2)} \in K(n). Using Eqs. (15) and (16) for T^{(1)} and T^{(2)}, the elements of P^{(j)} and Q are computed through the elements of n \times n unitary matrices U and V as follows:

P^{(j)}_{2l-1,2m-1} = P^{(j)}_{2l,2m} = u^R_{j,l} u^R_{j,m} + u^I_{j,l} u^I_{j,m},   (52)

P^{(j)}_{2l-1,2m} = -P^{(j)}_{2l,2m-1} = u^R_{j,l} u^I_{j,m} - u^I_{j,l} u^R_{j,m},   (53)

Q_{2l-1,2m-1} = Q_{2l,2m} = \sum_{p=1}^{n} \nu_p(A) \left( v^R_{l,p} v^R_{m,p} + v^I_{l,p} v^I_{m,p} \right),   (54)

and

Q_{2l-1,2m} = -Q_{2l,2m-1} = \sum_{p=1}^{n} \nu_p(A) \left( -v^R_{l,p} v^I_{m,p} + v^I_{l,p} v^R_{m,p} \right).   (55)
Substituting Eqs. (52), (53), (54), and (55) into Eq. (49) yields

\sum_{j=1}^{2k} (S_n D_A S_n^T)_{j,j}
= \frac{1}{4} \sum_{j=1}^{k} \sum_{l,m=1}^{n} u_{j,l} u^{*}_{j,m} (z_l + z_l^{-1})(z_m + z_m^{-1}) \sum_{p=1}^{n} \nu_p(A)\, v_{l,p} v^{*}_{m,p}
+ \frac{1}{4} \sum_{j=1}^{k} \sum_{l,m=1}^{n} u^{*}_{j,l} u_{j,m} (z_l + z_l^{-1})(z_m + z_m^{-1}) \sum_{p=1}^{n} \nu_p(A)\, v^{*}_{l,p} v_{m,p}
+ \frac{1}{4} \sum_{j=1}^{k} \sum_{l,m=1}^{n} u_{j,l} u^{*}_{j,m} (z_l - z_l^{-1})(z_m - z_m^{-1}) \sum_{p=1}^{n} \nu_p(A)\, v^{*}_{l,p} v_{m,p}
+ \frac{1}{4} \sum_{j=1}^{k} \sum_{l,m=1}^{n} u^{*}_{j,l} u_{j,m} (z_l - z_l^{-1})(z_m - z_m^{-1}) \sum_{p=1}^{n} \nu_p(A)\, v_{l,p} v^{*}_{m,p}
= \sum_{l=1}^{4} \sum_{j=1}^{k} \left( W^{(l)} D_A W^{(l)\dagger} \right)_{j,j},   (56)

where * denotes complex conjugation. On the right-hand side of Eq. (56), W^{(1)} = U Z_+ V, W^{(2)} = U^{*} Z_+ V^{*}, W^{(3)} = U Z_- V^{*}, and W^{(4)} = U^{*} Z_- V, with

Z_{\pm} = \frac{1}{2} \mathrm{diag}(z_1 \pm z_1^{-1}, \ldots, z_n \pm z_n^{-1}),   (57)

and D_A is understood here as the n \times n matrix \mathrm{diag}(\nu_1(A), \ldots, \nu_n(A)).
Note that the matrices W^{(l)} D_A W^{(l)\dagger} are positive semidefinite; W^{(l)} D_A W^{(l)\dagger} \geq 0 (l = 1, \ldots, 4). Now let \lambda_j(W^{(l)} D_A W^{(l)\dagger}) be the eigenvalues of the Hermitian matrices W^{(l)} D_A W^{(l)\dagger}. By the Schur theorem (Appendix B), we have

\sum_{j=1}^{k} \left( W^{(l)} D_A W^{(l)\dagger} \right)^{\uparrow}_{j,j} \geq \sum_{j=1}^{k} \lambda_j^{\uparrow}\left( W^{(l)} D_A W^{(l)\dagger} \right), \quad l = 1, \ldots, 4,   (58)

where (\cdot)^{\uparrow}_{j,j} denotes the jth smallest diagonal entry and \lambda_j^{\uparrow} the jth smallest eigenvalue. Thus, we obtain

\sum_{j=1}^{2k} (S_n D_A S_n^T)_{j,j} \geq \sum_{l=1}^{4} \sum_{j=1}^{k} \left( W^{(l)} D_A W^{(l)\dagger} \right)^{\uparrow}_{j,j} \geq \sum_{l=1}^{4} \sum_{j=1}^{k} \lambda_j^{\uparrow}\left( W^{(l)} D_A W^{(l)\dagger} \right).   (59)
Here,

\lambda_j^{\uparrow}\left( W^{(1)} D_A W^{(1)\dagger} \right) = \lambda_j^{\uparrow}\left( Z_+ V D_A V^{\dagger} Z_+ \right) = \lambda_j^{\uparrow}(Z_+ C Z_+)   (60)

with C = V D_A V^{\dagger} \geq 0. The first equality is due to the unitary invariance of the eigenvalues of Hermitian matrices. The eigenvalues \lambda_j^{\uparrow}(Z_+ C Z_+) admit the following max-min representation [36]:

\lambda_j^{\uparrow}(Z_+ C Z_+) = \max_{w_1, \ldots, w_{j-1} \in \mathbb{C}^n}\ \min_{x \neq 0,\ x \perp w_1, \ldots, w_{j-1}} \frac{x^{\dagger} Z_+ C Z_+ x}{\|x\|_2^2}.   (61)

If we write y = Z_+ x,

\|x\|_2^2 = \sum_{j=1}^{n} \left( Z_+^{-1} \right)_j^2 |y_j|^2 \leq \sum_{j=1}^{n} |y_j|^2 = \|y\|_2^2,   (62)

since (Z_+)_j = (z_j + z_j^{-1})/2 \geq 1. Hence,

\lambda_j^{\uparrow}(Z_+ C Z_+) \geq \max_{w_1, \ldots, w_{j-1} \in \mathbb{C}^n}\ \min_{y \neq 0,\ y \perp Z_+^{-1} w_1, \ldots, Z_+^{-1} w_{j-1}} \frac{y^{\dagger} C y}{\|y\|_2^2} = \lambda_j^{\uparrow}(C) = \nu_j(A),   (63)
so that \lambda_j^{\uparrow}(W^{(1)} D_A W^{(1)\dagger}) \geq \nu_j(A). Similarly, \lambda_j^{\uparrow}(W^{(2)} D_A W^{(2)\dagger}) \geq \nu_j(A). Since \lambda_j^{\uparrow}(W^{(3)} D_A W^{(3)\dagger}) \geq 0 and \lambda_j^{\uparrow}(W^{(4)} D_A W^{(4)\dagger}) \geq 0, we find

\sum_{j=1}^{2k} (S_n D_A S_n^T)_{j,j} \geq 2 \sum_{j=1}^{k} \nu_j(A).   (64)

The equality holds for S_n = I_{2n} \in Sp(2n, \mathbb{R}). This completes the proof.
Theorem 1. Let A and B be 2n \times 2n real symmetric positive definite matrices (A = A^T > 0, B = B^T > 0). Then

\nu(A + B) \prec^w \nu(A) + \nu(B).   (65)

Proof. By Lemma 1, we have

2 \sum_{j=1}^{k} \nu_j(A + B) = \min_{S J_n S^T = J_k} \mathrm{Tr}\, S (A + B) S^T \geq \min_{S J_n S^T = J_k} \mathrm{Tr}\, S A S^T + \min_{S J_n S^T = J_k} \mathrm{Tr}\, S B S^T = 2 \sum_{j=1}^{k} \nu_j(A) + 2 \sum_{j=1}^{k} \nu_j(B) = 2 \sum_{j=1}^{k} \left[ \nu(A) + \nu(B) \right]^{\uparrow}_j.   (66)

By the definition of weak supermajorization [Eq. (B2)], inequality (66) yields the desired relation (65).
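Theorem 1 is easy to probe numerically. The following sketch (an illustrative addition) draws random positive definite matrices and checks the weak supermajorization relation (65) through the partial sums of the smallest symplectic eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(1)
J1 = np.array([[0.0, 1.0], [-1.0, 0.0]])

def symplectic_eigenvalues(A):
    """Symplectic eigenvalues of A = A^T > 0, in increasing order."""
    n = A.shape[0] // 2
    Jn = np.kron(np.eye(n), J1)
    nu = np.sort(np.abs(np.linalg.eigvals(Jn @ A).imag))
    return nu[::2]

def random_spd(dim):
    """Random real symmetric positive definite matrix."""
    M = rng.normal(size=(dim, dim))
    return M @ M.T + dim * np.eye(dim)

# Theorem 1: nu(A+B) is weakly supermajorized by nu(A) + nu(B), i.e. for every k
# the sum of the k smallest entries of nu(A+B) is >= that of nu(A) + nu(B).
n = 4
for _ in range(100):
    A, B = random_spd(2 * n), random_spd(2 * n)
    lhs = np.cumsum(symplectic_eigenvalues(A + B))
    rhs = np.cumsum(symplectic_eigenvalues(A) + symplectic_eigenvalues(B))
    assert np.all(lhs >= rhs - 1e-9)
print("Theorem 1 verified on random samples")
```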
VI. ADDITIVITY AND MULTIPLICATIVITY
PROPERTIES OF GAUSSIAN CHANNELS
In this section, we focus on two classes of Gaussian channels: the classical noise channel and the thermal noise channel. Both are important cases of Gaussian channels.
In the classical noise channel, classical Gaussian noise is added to the input states. Since W(\xi) R_j W^{\dagger}(\xi) = R_j + \xi_j, the classical noise channel is described by

\Phi_{\mathrm{cl}}(\rho_{(\sigma,m)}) = \frac{1}{\pi^n \sqrt{\det Y}} \int d^{2n}\xi\, \exp(-\xi^T Y^{-1} \xi)\, W(\xi)\, \rho_{(\sigma,m)}\, W^{\dagger}(\xi) = \rho_{(\sigma + Y, m)}   (67)

with Y \geq 0. Namely, the transformation of the covariance matrix is given by

\phi_{\mathrm{cl}}(\sigma) = \sigma + Y.   (68)
In the thermal noise channel, the signal Gaussian states interact with an environment that is in thermal equilibrium. This channel is modeled by beamsplitters that couple the input Gaussian state and the thermal reservoir. Let a_j and b_j be the annihilation operators of the jth mode of the signal state \rho and of the thermal state \rho_{\mathrm{th}} that acts as a thermal reservoir. The action of the beamsplitter is described by the transformations a_j \to \cos\theta_j\, a_j + \sin\theta_j\, b_j and b_j \to -\sin\theta_j\, a_j + \cos\theta_j\, b_j. Accordingly, the corresponding symplectic transformation takes the form

S_j = \begin{pmatrix} \cos\theta_j\, I_2 & \sin\theta_j\, I_2 \\ -\sin\theta_j\, I_2 & \cos\theta_j\, I_2 \end{pmatrix}.   (69)

Therefore, the output Gaussian state has the covariance matrix

\phi_{\mathrm{th}}(\sigma) = \mathrm{Tr}_{\mathrm{th}}\left[ S^{-1} (\sigma \oplus \sigma_{\mathrm{th}}) (S^{-1})^T \right],   (70)

where S = S_1 \oplus S_2 \oplus \cdots \oplus S_n, \sigma_{\mathrm{th}} = \bigoplus_{j=1}^{n} (2\langle n \rangle_j + 1) I_2 denotes the covariance matrix of the thermal state \rho_{\mathrm{th}} with \langle n \rangle_j being the averaged photon number of the jth mode, and \mathrm{Tr}_{\mathrm{th}} denotes the trace over the thermal reservoir (on the level of covariance matrices, the restriction to the signal modes). Using Eq. (69), the right-hand side of Eq. (70) is calculated as

\phi_{\mathrm{th}}(\sigma) = X^T \sigma X + Y,   (71)

where

X = \bigoplus_{j=1}^{n} \sqrt{\eta_j}\, I_2   (72)

and

Y = \bigoplus_{j=1}^{n} (2\langle n \rangle_j + 1)(1 - \eta_j) I_2   (73)

with \eta_j = \cos^2\theta_j being the transmittivity of the beamsplitter. At zero temperature (\langle n \rangle_j = 0), the thermal noise channel is reduced to the lossy or attenuation channel [25, 26].
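A quick single-mode numerical check (added here for illustration) confirms that the beamsplitter model of Eqs. (69) and (70) reproduces the covariance map of Eqs. (71)-(73):

```python
import numpy as np

def thermal_channel_via_beamsplitter(sigma, theta, nbar):
    """Mix the signal covariance sigma with a thermal reservoir of mean photon
    number nbar on a beamsplitter of angle theta, then keep the signal block."""
    c, s = np.cos(theta), np.sin(theta)
    S = np.block([[c * np.eye(2), s * np.eye(2)],
                  [-s * np.eye(2), c * np.eye(2)]])            # Eq. (69)
    sigma_in = np.block([[sigma, np.zeros((2, 2))],
                         [np.zeros((2, 2)), (2 * nbar + 1) * np.eye(2)]])
    Sinv = np.linalg.inv(S)
    out = Sinv @ sigma_in @ Sinv.T                              # Eq. (70)
    return out[:2, :2]                                          # signal-mode block

theta, nbar = 0.6, 0.4
eta = np.cos(theta) ** 2
sigma = np.array([[2.0, 0.3], [0.3, 1.5]])                      # some input covariance
direct = eta * sigma + (2 * nbar + 1) * (1 - eta) * np.eye(2)   # Eqs. (71)-(73)
print(np.allclose(thermal_channel_via_beamsplitter(sigma, theta, nbar), direct))  # True
```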
A. Classical noise channels
For the l_j-mode classical noise channel \Phi^{\mathrm{cl}}_j, the covariance matrix is transformed according to \phi^{\mathrm{cl}}_j(\sigma) = \sigma + Y_j, with Y_j \geq 0. The tensor product of the \Phi^{\mathrm{cl}}_j is also a classical noise channel, and the covariance matrix of the output is given by

\phi_{\mathrm{cl}}(\sigma) = \left( \bigoplus_{j=1}^{m} \phi^{\mathrm{cl}}_j \right)(\sigma) = \sigma + Y,   (74)

where \sigma is the covariance matrix of the input Gaussian state and Y = \bigoplus_{j=1}^{m} Y_j. Since Y_j is not always strictly positive definite, we add \epsilon I_{2l_j} to Y_j (\epsilon > 0); Y_j(\epsilon) = Y_j + \epsilon I_{2l_j} > 0 (j = 1, \ldots, m), so that we can apply the Williamson theorem to Y_j(\epsilon). Here, we write Y(\epsilon) = \bigoplus_{j=1}^{m} Y_j(\epsilon). By the Williamson theorem, there exists S_j \in Sp(2l_j, \mathbb{R}) such that

S_j Y_j(\epsilon) S_j^T = \mathrm{diag}\left( y^{(j)}_1(\epsilon), y^{(j)}_1(\epsilon), \ldots, y^{(j)}_{l_j}(\epsilon), y^{(j)}_{l_j}(\epsilon) \right) = D_{Y_j}(\epsilon).   (75)

Here, we write S = \bigoplus_{j=1}^{m} S_j so that S Y(\epsilon) S^T = \bigoplus_{j=1}^{m} D_{Y_j}(\epsilon) = D_Y(\epsilon). By Theorem 1, we have

\nu\left( \sigma + Y(\epsilon) \right) = \nu\left( S \sigma S^T + D_Y(\epsilon) \right) \prec^w \nu(\sigma) + \nu(D_Y(\epsilon)).   (76)
Since F_p is increasing and Schur-concave, (76) yields

F_p\left( \nu(\sigma + Y(\epsilon)) \right) \geq F_p\left( \nu(\sigma) + \nu(D_Y(\epsilon)) \right).   (77)

Here we can take the limit \epsilon \to 0 to obtain

\inf_{\sigma_p} F_p\left( \nu(\phi_{\mathrm{cl}}(\sigma_p)) \right) \geq \inf_{\sigma_p} F_p\left( \nu(\sigma_p) + \nu(Y) \right),   (78)

where \nu(Y) = \bigoplus_{j=1}^{m} \nu(Y_j) with

\nu(Y_j) = \lim_{\epsilon \to 0} \left( y^{(j)}_1(\epsilon), y^{(j)}_2(\epsilon), \ldots, y^{(j)}_{l_j}(\epsilon) \right) = \left( y^{(j)}_1, y^{(j)}_2, \ldots, y^{(j)}_{l_j} \right).   (79)
The infimum on the right-hand side of (78) is achieved for \nu(\sigma_p) = (1, 1, \ldots, 1), and the equality holds in (78) if S \sigma_p S^T takes the Williamson standard form. Namely, for the covariance matrix \sigma_p such that S \sigma_p S^T = \mathrm{diag}(1, 1, \ldots, 1, 1),

\inf_{\sigma_p} F_p\left( \nu(\phi_{\mathrm{cl}}(\sigma_p)) \right) = \prod_{j=1}^{m} \prod_{k=1}^{l_j} f_p\left( 1 + y^{(j)}_k \right) = \prod_{j=1}^{m} \inf_{\sigma_p} F_p\left( \nu(\phi^{\mathrm{cl}}_j(\sigma_p)) \right).   (80)

That is, the maximal output p norm is multiplicative. Consequently, the Gaussian minimal output entropy and the Gaussian Holevo capacity are additive. Note that S defined above is the direct sum of local symplectic transformations and \mathrm{diag}(1, 1, \ldots, 1, 1) is the covariance matrix of a pure separable state, so that the optimal \sigma_p = S^{-1} \mathrm{diag}(1, 1, \ldots, 1, 1) (S^T)^{-1} is a separable pure state. This observation also indicates the multiplicativity of the maximal output p norm.
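The single-mode case of this argument can be checked numerically. The sketch below (an illustrative addition) draws random pure Gaussian inputs for a classical noise channel \phi(\sigma) = \sigma + y I_2 and verifies that none of them beats the vacuum input, in agreement with Eqs. (78) and (80).

```python
import numpy as np

def f_p(x, p):
    return (x + 1.0) ** p - (x - 1.0) ** p

def random_pure_covariance(rng):
    """Covariance matrix S S^T of a single-mode pure Gaussian state,
    with S = rotation * squeezing * rotation (Euler decomposition, Eq. (13))."""
    def rot(t):
        return np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
    z = np.exp(rng.uniform(0.0, 2.0))
    S = rot(rng.uniform(0, 2 * np.pi)) @ np.diag([z, 1 / z]) @ rot(rng.uniform(0, 2 * np.pi))
    return S @ S.T

rng = np.random.default_rng(2)
p, y = 1.7, 0.9                     # classical noise channel phi(sigma) = sigma + y I_2
bound = f_p(1.0 + y, p)             # value attained by the vacuum input, Eq. (80)
for _ in range(1000):
    sigma_p = random_pure_covariance(rng)
    nu_out = np.sqrt(np.linalg.det(sigma_p + y * np.eye(2)))   # single-mode symplectic eigenvalue
    assert f_p(nu_out, p) >= bound - 1e-9
print("vacuum input minimizes F_p for the single-mode classical noise channel")
```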
B. Thermal noise channels
For the l_j-mode thermal noise channel \Phi^{\mathrm{th}}_j, the covariance matrix is transformed according to

\phi^{\mathrm{th}}_j(\sigma) = \phi^{(0)}_j(\sigma) + Y_j,   (81)

where

Y_j = \bigoplus_{k=1}^{l_j} 2\langle n \rangle_k (1 - \eta_k) I_2   (82)

and

\phi^{(0)}_j(\sigma) = X^T \sigma X + Y,   (83)

with X = \bigoplus_{k=1}^{l_j} \sqrt{\eta_k}\, I_2, Y = \bigoplus_{k=1}^{l_j} (1 - \eta_k) I_2, and 0 \leq \eta_k \leq 1. The tensor product of the \Phi^{\mathrm{th}}_j is a Gaussian channel and the covariance matrix of the output state is given by

\phi_{\mathrm{th}}(\sigma) = \left( \bigoplus_{j=1}^{m} \phi^{\mathrm{th}}_j \right)(\sigma) = \phi^{(0)}(\sigma) + Y,   (84)

where \phi^{(0)}(\sigma) = \left( \bigoplus_{j=1}^{m} \phi^{(0)}_j \right)(\sigma) and Y = \bigoplus_{j=1}^{m} Y_j. Again, we add \epsilon I_{2n} to Y (\epsilon > 0); Y(\epsilon) = Y + \epsilon I_{2n} to ensure Y(\epsilon) > 0 (n = \sum_{j=1}^{m} l_j). Accordingly, we write \phi_{\mathrm{th}}(\sigma, \epsilon) = \phi^{(0)}(\sigma) + Y(\epsilon). By Theorem 1, we have

\nu\left( \phi_{\mathrm{th}}(\sigma, \epsilon) \right) \prec^w \nu\left( \phi^{(0)}(\sigma) \right) + \nu\left( Y(\epsilon) \right).   (85)
Since F_p is increasing and Schur-concave, (85) yields

F_p\left( \nu(\phi_{\mathrm{th}}(\sigma, \epsilon)) \right) \geq F_p\left( \nu(\phi^{(0)}(\sigma)) + \nu(Y(\epsilon)) \right).   (86)

Here we can take the limit \epsilon \to 0 to obtain

\inf_{\sigma_p} F_p\left( \nu(\phi_{\mathrm{th}}(\sigma_p)) \right) \geq \inf_{\sigma_p} F_p\left( \nu(\phi^{(0)}(\sigma_p)) + \nu(Y) \right).   (87)

Since the channel \Phi^{(0)} is completely positive, the Gaussian state with the covariance matrix \phi^{(0)}(\sigma_p) is a physical state, so that \nu_j\left( \phi^{(0)}(\sigma_p) \right) \geq 1 (j = 1, 2, \ldots, n). For \sigma = \mathrm{diag}(1, 1, \ldots, 1, 1), \nu_j\left( \phi^{(0)}(\sigma) \right) = 1 and the equality holds in (87). Hence,
\inf_{\sigma_p} F_p\left( \nu(\phi_{\mathrm{th}}(\sigma_p)) \right) = \prod_{j=1}^{m} \prod_{k=1}^{l_j} f_p\left( 1 + 2(1 - \eta_k)\langle n \rangle_k \right) = \prod_{j=1}^{m} \inf_{\sigma_p} F_p\left( \nu(\phi^{\mathrm{th}}_j(\sigma_p)) \right).   (88)

That is, the maximal output p norm is multiplicative. Consequently, the Gaussian minimal output entropy and the energy-constrained Gaussian Holevo capacity are additive.
VII. CONCLUDING REMARKS
We proved the multiplicativity of maximal output p norm
of classical noise channels and that of thermal noise channels
of arbitrary modes for all p > 1 under the assumption that
the input signal states were Gaussian states. As a direct con-
sequence, we also proved the additivity of the minimal out-
put entropy and the energy-constrained Holevo capacity for
those Gaussian channels under Gaussian inputs. A majoriza-
tion relation on symplectic eigenvalues was of importance in
the proof.
At present, very little is known about inequalities related to the symplectic eigenvalues of real positive-definite matrices. Efforts to unveil such unknown relations would assist in the
analysis of entropic quantities of Gaussian states and would
also shed light on the properties of Gaussian state entangle-
ment [37] and secure communication via Gaussian channels
[38].
Acknowledgments
The author would like to thank Masahito Hayashi, Osamu
Hirota, Masaki Sohma, and Xiang-Bin Wang for useful com-
ments and discussions. He is grateful to Hiroshi Imai for sup-
port.
APPENDIX A: CONCAVITY OF ln f_p(x)

It is readily seen that f_p(x) is concave for 1 \leq p \leq 2 and convex for p \geq 2. Therefore, \ln f_p(x) is concave for 1 \leq p \leq 2. In order to show the concavity of \ln f_p(x) for p \geq 2, we examine the second derivative of \ln f_p(x);

\frac{d^2}{dx^2} \ln f_p(x) = -\frac{p}{f_p^2(x)}\, g_p(x),   (A1)

where g_p(x) = 4p(x^2 - 1)^{p-2} + f_p(x) f_{p-2}(x). For p \geq 2, we find that g_p(x) \geq 0, so that d^2 \ln f_p(x)/dx^2 \leq 0. That is, \ln f_p(x) is concave for p \geq 2. Thus, \ln f_p(x) is concave for all p \geq 1.
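A quick finite-difference check of this conclusion (added for illustration; it samples a grid of x > 1 and a few values of p) is:

```python
import numpy as np

def log_f_p(x, p):
    """ln f_p(x) with f_p(x) = (x+1)^p - (x-1)^p, defined for x >= 1."""
    return np.log((x + 1.0) ** p - (x - 1.0) ** p)

# Numerical check of Eq. (A1): the second derivative of ln f_p is <= 0 on x > 1.
h = 1e-4
for p in (1.2, 2.0, 3.7, 10.0):
    x = np.linspace(1.01, 20.0, 500)
    second = (log_f_p(x + h, p) - 2 * log_f_p(x, p) + log_f_p(x - h, p)) / h ** 2
    assert np.all(second <= 1e-6), p
print("ln f_p is concave on the sampled grid for the sampled p values")
```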
APPENDIX B: MAJORIZATION AND SCHUR CONVEXITY
In this Appendix, we present definitions and basic facts on majorization and Schur convexity (concavity) used in this paper [39].

For vectors x = (x_1, x_2, \ldots, x_n) and y = (y_1, y_2, \ldots, y_n) (x_j, y_j \in \mathbb{R}), we write x \leq y if x_j \leq y_j (j = 1, 2, \ldots, n). Let x^{\downarrow} = (x^{\downarrow}_1, x^{\downarrow}_2, \ldots, x^{\downarrow}_n) denote the decreasing rearrangement of x, where x^{\downarrow}_1 \geq x^{\downarrow}_2 \geq \cdots \geq x^{\downarrow}_n. Similarly, let x^{\uparrow} = (x^{\uparrow}_1, x^{\uparrow}_2, \ldots, x^{\uparrow}_n) denote the increasing rearrangement of x, where x^{\uparrow}_1 \leq x^{\uparrow}_2 \leq \cdots \leq x^{\uparrow}_n.

We say that x is majorized by y and write x \prec y if

\sum_{j=1}^{k} x^{\downarrow}_j \leq \sum_{j=1}^{k} y^{\downarrow}_j, \quad k = 1, 2, \ldots, n,   (B1)

with equality for k = n.

We say that x is weakly submajorized (weakly supermajorized) by y and write x \prec_w (\prec^w) y if

\sum_{j=1}^{k} x^{\downarrow(\uparrow)}_j \leq (\geq) \sum_{j=1}^{k} y^{\downarrow(\uparrow)}_j, \quad k = 1, 2, \ldots, n.   (B2)

It is easy to see that x \prec y if and only if x \prec_w y and x \prec^w y.

A real-valued function f defined on \mathbb{R}^n is said to be increasing if x \leq y \Rightarrow f(x) \leq f(y), while f is said to be decreasing if -f is increasing.

A real-valued function f defined on \mathbb{R}^n is said to be Schur-convex if x \prec y \Rightarrow f(x) \leq f(y), while f is said to be Schur-concave if -f is Schur-convex.

A real-valued function f defined on \mathbb{R}^n satisfies x \prec_w (\prec^w) y \Rightarrow f(x) \leq (\geq) f(y) if and only if f is increasing and Schur-convex (concave).

Let g be a continuous and nonnegative function on \mathbb{R}. Then, f(x) = \prod_{j=1}^{n} g(x_j) is Schur-convex (concave) if and only if \log g is convex (concave).

An application of majorization theory to matrix analysis is the following Schur theorem [40]. Let A \in M_n(\mathbb{C}) be a Hermitian matrix. Let \mathrm{diag}(A) denote the vector whose elements are the diagonal entries of A and \lambda(A) the vector whose components are the eigenvalues of A. Then, \mathrm{diag}(A) \prec \lambda(A).
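For completeness, the definitions (B1) and (B2) and the Schur theorem translate directly into short numerical tests (an illustrative addition, not part of the original appendix):

```python
import numpy as np

def weakly_supermajorized(x, y):
    """True if x is weakly supermajorized by y, Eq. (B2): the partial sums of the
    increasingly ordered entries of x dominate those of y."""
    return np.all(np.cumsum(np.sort(x)) >= np.cumsum(np.sort(y)) - 1e-12)

def majorized(x, y):
    """True if x is majorized by y, Eq. (B1)."""
    xs, ys = np.sort(x)[::-1], np.sort(y)[::-1]
    return np.all(np.cumsum(xs) <= np.cumsum(ys) + 1e-12) and np.isclose(np.sum(x), np.sum(y))

# Schur theorem: diag(A) is majorized by the eigenvalue vector of a Hermitian A.
rng = np.random.default_rng(3)
M = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
A = (M + M.conj().T) / 2
d, lam = np.real(np.diag(A)), np.linalg.eigvalsh(A)
print(majorized(d, lam))               # True
print(weakly_supermajorized(d, lam))   # True: majorization implies Eq. (B2)
```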
[1] M. A. Nielsen and I. L. Chuang, Quantum Computation
and Quantum Information (Cambridge University Press, Cam-
bridge, United Kingdom, 2000).
[2] M. Hayashi, An Introduction to Quantum Information Theory
(Springer-Verlag, Berlin, 2006).
[3] A. S. Holevo, IEEE Trans. Inf. Theory 44, 269 (1998).
[4] B. Schumacher and M. D. Westmoreland, Phys. Rev. A 56, 131
(1997).
[5] P. W. Shor, J. Math. Phys. 43, 4334 (2002).
[6] C. King, J. Math. Phys. 43, 4641 (2002).
[7] C. King, IEEE Trans. Inf. Theory 49, 221 (2003).
[8] K. Matsumoto and F. Yura, J. Phys. A 37, L167 (2004).
[9] K. Matsumoto, T. Shimono, and A. Winter, Commun. Math.
Phys. 246, 427 (2004).
[10] K. M. R. Audenaert and S. L. Braunstein, Commun. Math.
Phys. 246, 443 (2004).
[11] P. W. Shor, Commun. Math. Phys. 246, 453 (2004).
[12] A. A. Pomeransky, Phys. Rev. A 68, 032317 (2003).
[13] V. Giovannetti and S. Lloyd, Phys. Rev. A 69, 062307 (2004).
[14] V. Giovannetti, S. Lloyd, L. Maccone, J. H. Shapiro, and B. J.
Yen, Phys. Rev. A 70, 022328 (2004).
[15] V. Giovannetti, S. Guha, S. Lloyd, L. Maccone, and J. H.
Shapiro, Phys. Rev. A 70, 032315 (2004).
[16] V. Giovannetti, S. Guha, S. Lloyd, L. Maccone, J. H. Shapiro,
and H. P. Yuen, Phys. Rev. Lett. 92, 027902 (2004).
[17] A. Serafini, J. Eisert, and M. M. Wolf, Phys. Rev. A 71, 012320
(2005).
[18] J. Eisert and M. Plenio, Int. J. Quant. Inf. 1, 479 (2003).
[19] D. Petz, An Invitation to the Algebra of Canonical Commutation
Relations (Leuven University Press, Leuven, 1990).
[20] J. Manuceau and A. Verbeure, Commun. Math. Phys. 9, 293
(1968).
[21] R. Simon, E.C.G. Sudarshan, and N. Mukunda, Phys. Rev. A
36, 3868 (1987).
[22] J. Williamson, Am. J. Math. 58, 141 (1936); R. Simon, S.
Chaturvedi, and V. Srinivasan, J. Math. Phys. 40, 3632 (1999).
[23] Arvind, B. Dutta, N. Mukunda, and R. Simon, Pramana 45, 471
(1995); e-print quant-ph/9509002.
[24] R. Simon, N. Mukunda, and B. Dutta, Phys. Rev. A 49, 1567 (1994).
[25] A. S. Holevo and R. F. Werner, Phys. Rev. A 63, 032312 (2001).
[26] J. Eisert and M. M. Wolf, e-print quant-ph/0505151.
[27] G. Lindblad, J. Phys. A 33, 5059 (2000).
[28] G. G. Amosov, A. S. Holevo, and R. F. Werner, Problems of Inf.
Trans. 36, 305 (2000).
[29] C. King and M. B. Ruskai, IEEE Trans. Inform. Theory 47, 192
(2001).
[30] A. S. Holevo, Problems of Inf. Trans. 5, 247 (1979).
[31] A. S. Holevo, M. Sohma, and O. Hirota, Rep. Math. Phys. 46,
343 (2000).
[32] A. S. Holevo, Russian Math. Surveys 53, 1295 (1998).
[33] A. S. Holevo, M. Sohma, and O. Hirota, Phys. Rev. A 59, 1820
(1999).
[34] M. M. Wolf, G. Giedke, and J. I. Cirac, e-print
quant-ph/0509154.
[35] M. Ohya and D. Petz, Quantum Entropy and Its Use (Springer-Verlag, New York, 1993).
[36] R. A. Horn and C. R. Johnson, Matrix Analysis (Cambridge
University Press, Cambridge, United Kingdom, 1985).
[37] M. M. Wolf, G. Giedke, O. Krüger, R. F. Werner, and J. I. Cirac,
Phys. Rev. A 69, 052320 (2004).
[38] M. Navascués, J. Bae, J. I. Cirac, M. Lewenstein, A. Sanpera, and A. Acín, Phys. Rev. Lett. 94, 010502 (2005); M. Navascués and A. Acín, Phys. Rev. A 72, 012303 (2005).


[39] A. Marshall and I. Olkin, Inequalities: Theory of Majorization
and Its Applications (Academic Press, San Diego, 1979).
[40] R. Bhatia, Matrix Analysis (Springer-Verlag, New York, 1989).
