Sparse Systems With High Local Multiplicity
Frédéric Bihan, Alicia Dickenstein, and Jens Forsgård
1. Introduction
Consider a univariate Laurent polynomial f with support in a finite exponent
set A ⊂ Z and let q ∈ C∗. It is easy to see that the multiplicity of q as a root of f is
strictly smaller than the cardinality N of A, independently of the degree d of f. Up
to multiplying f by a monomial, we can assume that A ⊂ {0, 1, . . . , d} and that it
contains 0. The polynomial (x − q)d has support in {0, 1, . . . , d} with d + 1 nonzero
coefficients and it has multiplicity d = (d + 1) − 1 at q. As long as N < d + 1,
the multiplicity of q as a root of f is necessarily smaller than d. The upper bound
N − 1 was already given by G. Hájos [15] and a simple proof by induction on d can
be found in [12, Lemma 3.9].
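For a concrete small case (a minimal sketch, assuming the sympy library is available; the trinomial below is our own illustrative example), the polynomial 2x^5 − 5x^2 + 3 has N = 3 terms and its multiplicity at q = 1 is exactly 2 = N − 1, even though its degree is 5:

    # Check the bound N - 1 on a 3-term example (sympy assumed).
    import sympy as sp

    x = sp.symbols('x')
    f = 2*x**5 - 5*x**2 + 3        # support {0, 2, 5}, so N = 3, degree d = 5
    q, order, g = 1, 0, f
    while g.subs(x, q) == 0:       # multiplicity = first non-vanishing derivative
        order += 1
        g = sp.diff(g, x)
    print(order)                   # prints 2, that is N - 1, although d = 5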
In the multivariate case, we fix a finite set A = {a_0, . . . , a_{N−1}} ⊂ Z^n of cardinality N. We assume that the convex hull of A has full dimension n and we call
(1)    m = N − (n + 1)
the codimension of A. To avoid trivial cases, we will assume that N ≥ n + 2, that
is, that the codimension m is at least 1.
Consider a system of n Laurent polynomials with complex coefficients supported
on A:
(2)    f_i(x) = \sum_{j=0}^{N−1} c_{ij} x^{a_j} = 0,    i = 1, . . . , n.
Here, x = (x_1, . . . , x_n) and we will always assume that the coefficient matrix
C = (c_{ij}) ∈ C^{n×N} has maximal rank n. In matrix form, we write the system (2) as
(3)    C · x^A = 0,
For any n and m we present the following explicit system given by the n-variate
polynomials
(5)    f_{m+1,n}(x) = · · · = f_{m+n,n}(x) = 0,
defined as follows for any d ≥ 0:
(6)    f_{d,n}(x) = \sum_{k=0}^{d} (−1)^k \binom{d}{k} x_1^{k} x_2^{k^2} · · · x_n^{k^n}.
All the supports are contained in the support of fm+n,n , which has codimension m.
Theorem 7.1 in Section 7 shows that the intersection multiplicity of system (5) at
the isolated root 1 = (1, . . . , 1) ∈ (C∗ )n is at least
\binom{n+m}{n} = \binom{N−1}{n}.
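The following small sketch (assuming sympy; the helper f below is ours, not taken from the text) builds f_{d,n} from (6) and checks two easy instances: for n = 1 the polynomial collapses to (1 − x)^d, and for n = 2, m = 1 both polynomials of system (5) vanish at (1, 1).

    # Build f_{d,n} of (6) and check two small cases (sympy assumed).
    import sympy as sp

    def f(d, n):
        xs = sp.symbols(f'x1:{n+1}')
        poly = sum((-1)**k * sp.binomial(d, k) *
                   sp.Mul(*[xs[i]**(k**(i+1)) for i in range(n)])
                   for k in range(d + 1))
        return poly, xs

    p, (x1,) = f(4, 1)
    print(sp.factor(p))                          # (x1 - 1)**4, multiplicity 4 at 1

    for d in (2, 3):                             # system (5) for n = 2, m = 1
        p, xs = f(d, 2)
        print(p.subs({xs[0]: 1, xs[1]: 1}))      # 0 and 0: both vanish at (1, 1)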
Question 1.1. Is there a relation between the upper bound for the (global) number
of positive real solutions of a real sparse polynomial system and the (local) intersection multiplicity at an isolated solution in the complex torus of a complex
sparse polynomial system with the same support?
Question 1.2. Does there exist a small perturbation of the coefficients in (6) giving
a system with real coefficients and with (at least) \binom{m+n}{n} positive solutions?
We answer positively Question 1.2 in the codimension 1 case in Section 8.
Moreover, we pose the following question/conjecture about the symmetry be-
tween dimension and codimension. A similar question concerning the number of
positive roots has been raised and partially studied in [3].
Question 1.3. Call M (n, m) the highest possible multiplicity at a multiple point in
the torus of a Laurent polynomial system as in (2) in n variables and codimension
m. Does it hold that
(7) M (n, m) = M (m, n)?
We expect that the answer to Question 1.3 is positive. One obstacle we need to
overcome to answer Question 1.3 affirmatively is summarized in the following simple
example. Let A = {0, 1, 2, 3} and f = (x−1)3 . Then 1 is a non-degenerate multiple
root of f of multiplicity 3 but it is not true that for any choice of matrices B and D
reduced such that the associated Gale dual system is convenient, it holds that (0, 0)
is a non-degenerate solution of the Gale dual system. This is similar to questions
arising from the computation of the Milnor number of an isolated singularity [18].
We then pose our next question. We refer the reader to Subsection 5.1 for the
notation.
Question 1.4. For any choice of matrices A, C and Gale dual matrices B, D as
above, do there always exist matrices à and C̃ left equivalent to A and C and
matrices B̃ and D̃ right equivalent to B and D such that the systems C̃ ·(x+1)Ã = 0
and the Gale system (D̃ · y)B̃ = 1 are convenient and non-degenerate at the origin?
A positive answer to Question 1.4 would imply a positive answer to Question 1.3.
Moreover, it would imply that for any solution of system (2) the intersection multiplicity is bounded by (m + n)^{min(m,n)}.
We end this introduction by explaining the rationale behind our definition of the
system fm+1 (x) = · · · = fn+m (x) = 0 in Theorem 7.1. The first basic observation
is the following. We recall that the multiplicity of a point q of a hypersurface f = 0
(or the multiplicity of f at q) is defined as 0 if f(q) ≠ 0 and otherwise, as the
maximal k ≥ 1 such that f and all its partial derivatives up to order k − 1 vanish at q.
Given A = {a_0, . . . , a_{N−1}} ⊂ Z^n and a polynomial f = \sum_{j=0}^{N−1} c_j x^{a_j} with support
A, it is straightforward to check that f(x) = ∂f/∂x_1(x) = · · · = ∂f/∂x_n(x) = 0 at
1 = (1, . . . , 1) (that is, the multiplicity of f at 1 is at least 2) if and only if the
vector c = (c_0, . . . , c_{N−1}) lies in the kernel of the following (n + 1) × N integer
matrix (also denoted by A):
(8)    A = \begin{pmatrix} 1 & . . . & 1 \\ a_0 & . . . & a_{N−1} \end{pmatrix},
with columns (1, ai ) ∈ Zn+1 . By our hypothesis about the point configuration of
exponents, the matrix A has maximal rank and thus we have that the codimension
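As an illustration of this kernel criterion (a sketch assuming sympy, with a hypothetical support A ⊂ Z^2 chosen only for the example), any vector c in the kernel of the matrix (8) yields a polynomial whose value and first partial derivatives vanish at 1:

    # Kernel criterion for multiplicity at least 2 at 1 = (1, 1) (sympy assumed).
    import sympy as sp

    x1, x2 = sp.symbols('x1 x2')
    A = [(0, 0), (1, 0), (0, 1), (2, 1)]               # hypothetical support
    M = sp.Matrix([[1]*len(A), [a[0] for a in A], [a[1] for a in A]])
    c = M.nullspace()[0]                               # a vector in the kernel of (8)
    f = sum(c[j] * x1**A[j][0] * x2**A[j][1] for j in range(len(A)))
    vals = [f, sp.diff(f, x1), sp.diff(f, x2)]
    print([v.subs({x1: 1, x2: 1}) for v in vals])      # [0, 0, 0]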
B_∆ := R^n_{≥0} \ (∆ + R^n_{≥0})
is bounded, where the plus sign stands for Minkowski sum. A polytope in P is
said to be convenient. Equivalently, a convex polytope is convenient if and only if
it intersects every coordinate axis. The mixed covolume is the unique symmetric
multilinear function Vol° : P^n → R_{≥0} such that Vol°(∆, ∆, . . . , ∆) is the usual Euclidean volume of B_∆ multiplied by n!. Note that if ∆_1, . . . , ∆_n ∈ P are polytopes
with vertices in Z^n, then Vol°(∆_1, . . . , ∆_n) is an integer.
Recall that the usual mixed volume Vol of n convex sets in Rn is the unique
symmetric multilinear function Vol such that Vol(∆, ∆, . . . , ∆) is the usual Eu-
clidean volume of ∆ multiplied by n!, for any convex set ∆. We have the following
covolume formula for polytopes ∆1 , . . . , ∆n ∈ P, which is similar to a well-known
formula for the usual mixed volume:
(9)    Vol°(∆_1, ∆_2, . . . , ∆_n) = \sum_{I ⊂ [n], I ≠ ∅} (−1)^{n−|I|} Vol(B_{∆_I}),
where Vol stands for the Euclidean volume and ∆_I = \sum_{i∈I} ∆_i. Note that if
∆1 , . . . , ∆n ∈ P then ∆I ∈ P for any I ⊂ [n].
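For instance, if ∆ ∈ P is the segment with vertices (1, 0) and (0, 1), then B_∆ is a triangle of area 1/2 and, with the sign convention of (9) for n = 2 and ∆_1 = ∆_2 = ∆,
Vol°(∆, ∆) = Vol(B_{∆+∆}) − 2 Vol(B_∆) = 2 − 1 = 1 = 2! · Vol(B_∆),
in agreement with the normalization of the mixed covolume given above.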
We now introduce some important definitions.
Definition 2.1. Consider a polynomial f ∈ C[x1 , . . . , xn ].
• The Newton polytope ∆(f) of f is the convex hull of the set Supp(f) of all
the monomials occurring in f with nonzero coefficient.
• The Newton diagram D(f) is the union of the bounded faces of the convex
hull of the union of the sets α + R^n_{≥0} for α ∈ Supp(f).
• A polynomial f and its Newton diagram are called convenient if ∆(f ) is
convenient. A system of polynomials is convenient if each polynomial is
convenient.
Thus, a polynomial f is convenient if and only if its Newton diagram D(f) intersects
each coordinate axis, equivalently, if ∆(f) contains a point in each coordinate axis.
Definition 2.2. Consider a polynomial system G1 = · · · = Gn = 0 in n variables
having an isolated solution at the origin. For any vector ϵ ∈ Rn>0 and any subset K
of {1, . . . , n}, we denote by (Gi |K )ϵ the subsum of terms in Gi only depending on
the variables xj with j ∈ K such that the inner product ⟨ϵ, aℓ ⟩ is minimum among
such monomials. The system is called non-degenerate at the origin if G1 , . . . , Gn are
convenient and none of the initial systems (Gi |K )ϵ = 0, i = 1, . . . , n, have a common
solution in the corresponding torus. In general, given a system F1 = · · · = Fm = 0
with an isolated solution at a point q = (q_1, . . . , q_n), the system is called non-degenerate at q if the system G_i(x) = F_i(x_1 + q_1, . . . , x_n + q_n) = 0, i = 1, . . . , m,
has a non-degenerate solution at the origin.
The non-existence of a common solution of the systems (Gi |K )ϵ = 0, i = 1, . . . , n
in Definition 2.2 can be ensured with the non-vanishing of appropriate face resul-
tants [14].
We state two basic known results.
Theorem 2.3. [19, Theorem IX.8 and Proposition IX.29], [8, Theorem 1], [11, 20]
Let f1 , . . . , fn ∈ C[x1 , . . . , xn ] be polynomials which vanish at the origin. Denote by
∆i the Newton polytope of fi . If f1 , . . . , fn are convenient, then
mult0 (f1 , . . . , fn ) ≥ Vol◦ (∆1 , ∆2 , . . . , ∆n ).
Moreover, if the polynomial system f1 = · · · = fn = 0 is non-degenerate at the
origin, then
mult0 (f1 , . . . , fn ) = Vol◦ (∆1 , ∆2 , . . . , ∆n ).
The definitions of Newton diagram, the origin being a non-degenerate isolated
solution and the local multiplicity at the origin can be extended to power series
(centered at the origin). Theorem 2.3 also holds in this case.
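As a toy illustration of Theorem 2.3 (a sketch assuming sympy; the convenient polynomials f_1 = x_2 − x_1^2, f_2 = x_2^2 − x_1^5 and the hand-computed areas below are our own example, not taken from the text), the multiplicity at the origin and the mixed covolume both equal 4:

    # Toy check of Theorem 2.3 (sympy assumed).
    import sympy as sp

    x1, x2 = sp.symbols('x1 x2')
    f1 = x2 - x1**2
    f2 = x2**2 - x1**5

    # f1 = 0 is the graph x2 = x1**2, so the multiplicity at the origin is the
    # vanishing order in x1 of f2 restricted to that graph.
    r = sp.expand(f2.subs(x2, x1**2))                            # x1**4 - x1**5
    print(min(sp.degree(t, x1) for t in r.as_ordered_terms()))   # 4

    # Mixed covolume via the inclusion-exclusion formula (9): the regions
    # B_∆1, B_∆2, B_(∆1+∆2) are polygons of areas 1, 5, 10 (computed by hand).
    print(10 - 1 - 5)                                            # 4, matching the multiplicity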
Proposition 2.4. (Decreasing property)[19, Example IX.3] Consider polytopes
Pi , Qi ∈ P, i = 1, . . . , n, such that Pi ⊂ Qi for all i = 1, . . . , n. Then
Vol◦ (P1 , P2 , . . . , Pn ) ≥ Vol◦ (Q1 , Q2 , . . . , Qn ).
It follows from Definition 2.1 that the Newton diagram D(F) of a polynomial
F ∈ C[x_1, . . . , x_n] is the union of the bounded faces of the convex hull of the
union of the sets β + R^n_{≥0} for β ∈ Z^n_{≥0} such that ∂^α(F)(0) = 0 for any α < β
and ∂^β(F)(0) ≠ 0. Thus, if β ∈ Z^n_{≥0} lies in the Newton diagram D(F) then
∂^α(F)(0) = 0 for any α < β. Moreover, if β is a vertex of D(F) then ∂^β(F)(0) ≠ 0.
Some interesting consequences are proved in Corollaries 3.13, 3.16 and 3.17, and in
Proposition 3.14.
We will use the setting and the notations in the Introduction. Consider a system
f1 = · · · = fn = 0 as in (2) of n polynomial equations and n variables, with
exponents in a set
A = {a0 , a1 , . . . , an+m } ⊂ Zn
consisting of n + m + 1 points and not contained in a hyperplane. We also denote
this system as C · xA = 0, where C = (cij ) ∈ Cn×(n+m+1) is the coefficient matrix
of the system of rank n. Our condition that the convex hull of the exponents
ai , i = 0, . . . , n + m, has full dimension n means that this matrix A has maximal
rank n+1. As we mentioned, our aim is to bound the local multiplicity of an isolated
solution q ∈ (C∗ )n of the system. As we pointed out, it is enough to consider the
case q = 1, or equivalently to assume that 0 is an isolated solution of the system
C · (x + 1)A = 0, that we also denote in (4) by Fk (x) = fk (x + 1), k = 1, . . . , n.
3.1. Systems with integer coefficients. We next prove that the maximal multiplicity at the non-degenerate common solution 1 of a convenient sparse system
can be achieved with a system with integer coefficients.
Theorem 3.1. Let C ∈ C^{n×(n+m+1)} and A ∈ Z^{(n+1)×(n+m+1)} such that the system
C · (x + 1)^A = 0 is convenient and non-degenerate at 0 with local intersection
multiplicity µ. Then there exists an integer matrix C′ ∈ Z^{n×(n+m+1)} such that
the system C′ · (x + 1)^A = 0 is convenient and non-degenerate at 0 of the same
multiplicity µ.
Proof. There exists a unique a ∈ Zn such that the translate a + A lies in the non-
negative orthant and intersects all the coordinate axes. Adding a to the support
amounts to multiplying the polynomials fk , k = 1, . . . , n, by the monomial z a . So,
we assume without loss of generality that A lies in the non-negative orthant and
intersects all the coordinate axes. Moreover, we can assume that for any ai in A
there is an index k such that cki ̸= 0, since otherwise we could consider a smaller
support (with smaller codimension). Then, adding multiples of other rows of C, we
can assume that all coefficients are nonzero.
Given an index k = 1, . . . , n, denote F_k(z) = f_k(z + 1) as in (4). By Lemma 3.3,
a monomial z^γ is a vertex of the Newton diagram D(F_k) if ∂^α(F_k)(0) = 0 for any
α ≤ γ, α ≠ γ and ∂^γ(F_k)(0) ≠ 0. To compute these derivatives, note that
F_k(z) = \sum_{j=0}^{m+n} c_{kj} (z + 1)^{a_j} = \sum_{j=0}^{m+n} c_{kj} \prod_{i=1}^{n} \sum_{ℓ_i=0}^{a_{ij}} \binom{a_{ij}}{ℓ_i} z_i^{ℓ_i},
and so,
(10)    F_k(z) = \sum_{j=0}^{n+m} \sum_{0 ≤ α_i ≤ a_{ij}} c_{kj} \prod_{i} \binom{a_{ij}}{α_i} z^α.
We extend the definition of the binomial coefficients by setting \binom{a_{ij}}{α_i} = 0 if α_i > a_{ij}
for the coefficient of z^α in (10). As A′_k is an integer matrix, we can find nonzero vectors c′_k in its kernel
with integer coefficients, for any k = 1, . . . , n.
We need to ensure that we can choose an integer matrix C′ such that the associated system C′ · (z + 1)^A = 0 is convenient and has a non-degenerate root at the origin.
For this, observe that the coefficient vectors have to satisfy a finite number of open
conditions. On one side, since the system C · (x + 1)^A = 0 is convenient, we have
that for each ℓ = 1, . . . , n, there exists a nonzero vector γ^ℓ = (0, . . . , 0, γ_ℓ, 0, . . . , 0)
which is a vertex of D(F_k). It follows that the linear form in the coefficients (c_{kj})
corresponding to α = γ^ℓ in (10) is nonzero. To get a convenient polynomial, we
then need to pick the integer coefficients in the open set where these linear forms
corresponding to all the coordinates ℓ = 1, . . . , n do not vanish. Moreover, the
system is nondegenerate when a finite set of face resultants are nonzero at the coefficients of the system. These face resultants are not identically zero because
they do not vanish at the coefficient vectors c_k of the original polynomials. As it cannot
happen that a nonzero polynomial vanishes on all the integer points of a dense open
set of a positive dimensional complex subspace where it is not identically zero, we
can choose integer vectors of coefficients (c′_1, . . . , c′_n) such that the hypothesis of
non-degeneracy is also satisfied. Thus, the associated polynomials have the same
Newton diagrams D(F1 ), . . . , D(Fn ) and the same intersection multiplicity µ at the
origin. □
We have the following immediate corollary of Theorem 3.1.
Corollary 3.2. Given a finite lattice configuration A ⊂ Zn with full dimensional
convex hull, the maximal possible multiplicity of an isolated singularity at 1 of a
system f1 = · · · = fn = 0 with exponents in A as in (5) for which the system
f1 (x + 1) = · · · = fn (x + 1) = 0 is convenient and non-degenerate at the origin, is
attained by polynomials with integer coefficients.
3.2. Explicit bounds. We next prove our main Theorem 3.11 with an upper
bound on the local multiplicity in terms of n and m. As in the proof of Theo-
rem 3.1 we will assume without loss of generality that A lies in the non-negative
orthant and intersects all the coordinate axes. We denote the toric derivatives
θ_j = x_j ∂_j, j = 1, . . . , n, and for β ∈ Z^n_{≥0}, we write θ^β = θ_1^{β_1} · · · θ_n^{β_n}. We use the
partial order on Z^n_{≥0} given by α ≤ β if α_i ≤ β_i for i = 1, . . . , n. We will write α < β
when α ≤ β and α ̸= β. For instance, we have (1, 2) < (2, 2). We will denote by
e1 , . . . , en the standard basis of Rn .
Lemma 3.3. Let f ∈ C[x_1, . . . , x_n] and q ∈ C^n. Set F ∈ C[x_1, . . . , x_n] as
F(x) = f(x_1 + q_1, . . . , x_n + q_n) = f(x + q). Let β ∈ Z^n_{≥0}. Then
∂^α(F)(0) = 0 for any α ∈ Z^n_{≥0} such that α < β if and only if θ^α(f)(q) = 0 for any
α ∈ Z^n_{≥0} such that α < β. In this case
(11)    θ^β(f)(q) = q^β ∂^β(F)(0).
In particular, if q = 1 we have the equality θ^β(f)(1) = ∂^β(F)(0).
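A one-variable check of (11) (assuming sympy; the polynomial below is a hypothetical example): take f = (x − 1)^3 x^2 and q = 1, so that θ^α(f)(1) = 0 for α < 3, and compare θ^3(f)(1) with q^3 ∂^3(F)(0).

    # Check of identity (11) for beta = 3 (sympy assumed).
    import sympy as sp

    x = sp.symbols('x')
    f = (x - 1)**3 * x**2
    theta = lambda g: x * sp.diff(g, x)            # toric derivative theta = x d/dx
    t3 = theta(theta(theta(f))).subs(x, 1)         # theta^3 f at 1
    F = sp.expand(f.subs(x, x + 1))                # F(x) = f(x + 1)
    d3 = sp.diff(F, x, 3).subs(x, 0)               # third derivative of F at 0
    print(t3, d3)                                  # 6 6, as predicted by (11)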
The proof of Lemma 3.3 is a direct consequence of the following well known
result.
Lemma 3.4. Let P1 and P2 be infinitely many times differentiable maps defined in
an open neighborhood of a point a ∈ Cn . Assume that P1 (a) ̸= 0 and let F = P1 P2 .
Let β ∈ Zn≥0 . Then (∂ α F )(a) = 0 for any α ∈ Zn≥0 such that α < β if and only if
(∂ α P2 )(a) = 0 for any α ∈ Zn≥0 such that α < β. Moreover, if (∂ α F )(a) = 0 for
any α such that α < β, then (∂ β F )(a) = P1 (a) · (∂ β P2 )(a).
The proof of Lemma 3.4 follows from the fact that by Leibniz rule ∂ α F is a
linear combination of products ∂ α1 P1 · ∂ α2 P2 over all (α1 , α2 ) ∈ (Zn≥0 )2 such that
α1 + α2 = α.
We introduce a useful notation.
Definition 3.5. Given matrices C, A as before and α ∈ Zn≥0 we define
(12)    L_k(α) = \sum_{j=0}^{m+n} c_{kj} a_j^α.
We also set
(14)    ρ_{k,ℓ} = |{u ∈ {a_{ℓ0}, . . . , a_{ℓ,m+n}} : c̄_{k,ℓ,u} ≠ 0}| − 1,
Assume for simplicity that aℓ,0 , . . . , aℓ,tℓ are distinct so that {aℓ0 , aℓ1 , . . . , aℓ,m+n } =
{aℓ0 , aℓ1 , . . . , aℓ,tℓ }. Write c̄s for c̄k,ℓ,aℓs , s = 0, . . . , tℓ . For any positive integer L
consider the Vandermonde matrix
(18)    V_L = (a_{ℓs}^λ)_{0≤λ≤L, 0≤s≤t_ℓ}.
Since A has maximal rank, at least one coefficient aℓs is nonzero. Therefore (17)
is satisfied if and only if the column vector (c̄s )s=0,...,tℓ belongs to the kernel of VL
for any positive integer L, equivalently, (c̄s )s=0,...,tℓ is the zero vector.
Assume now that β = βk,ℓ · eℓ ∈ D(Fk ). Then β is a vertex of D(Fk ). Thus,
by Lemma 3.6 we have L_k(β) ≠ 0 and L_k(λ · e_ℓ) = 0 for any non-negative integer
λ < β_{k,ℓ}. Let c̃ be the column vector obtained by deleting the zero coordinates
of (c̄_s)_{s=0,...,t_ℓ}. Then c̃ is a non-zero vector which belongs to the kernel of the
Vandermonde matrix Ṽ_{β_{k,ℓ}−1} obtained by removing the corresponding columns of
V_{β_{k,ℓ}−1}. The number of columns of Ṽ_{β_{k,ℓ}−1} equals ρ_{ℓ,k} + 1 and its number of rows
equals β_{k,ℓ}. Since a_{ℓ0}, a_{ℓ1}, . . . , a_{ℓ,t_ℓ} are distinct this implies that β_{k,ℓ} ≤ ρ_{ℓ,k} + 1 − 1 = ρ_{ℓ,k}. □
To simplify the notation, we will assume from now on, as before, that q = 1.
Proposition 3.9. Assume that 1 is an isolated solution of the system C · x^A = 0.
Then there exists an invertible matrix M ∈ C^{n×n} such that the system MC · (x + 1)^A = 0 is convenient.
Proof. Let ℓ ∈ {1, . . . , n}. By Lemma 3.8, we have c̄k,ℓ,u = 0 for all k = 1, . . . , n
and all u ∈ {aℓ0 , . . . , aℓm+n } if and only if the ℓ-coordinate axis is contained in the
set of solutions of F1 = · · · = Fn = 0, which would imply that 1 is not an isolated
solution of the system C · xA = 0. So assuming that 1 is an isolated solution of
the system C · xA = 0, we obtain the existence of k and u such that c̄k,ℓ,u ̸= 0.
Then, taking suitable (invertible) linear combinations of rows of C we can ensure
that for any k = 1, . . . , n the sum c̄k,ℓ,u is nonzero, and thus that no polynomial
F1 , . . . , Fn vanishes identically on the ℓ-coordinate axis. Repeating the procedure
for ℓ = 1, . . . , n gives an invertible matrix M such that M C · (x + 1)A = 0 is
convenient. □
Proposition 3.10. If a_{i,0}, . . . , a_{i,n+m} are distinct for i = 1, . . . , n then the system
C · (x + 1)^A = 0 is convenient. In general, there exists an invertible matrix M′ ∈
Z^{(n+1)×(n+1)} such that the system C · (x + 1)^{M′A} = 0 is convenient.
Proof. Assume first that a_{i,0}, . . . , a_{i,n+m} are distinct for i = 1, . . . , n. Then by
Lemma 3.8, the polynomial F_k is identically zero on the ℓ-coordinate axis if and
only if c_{k,0} = · · · = c_{k,n+m} = 0. But no row vector of C is zero since C has maximal
rank and we conclude that C · (x + 1)^A = 0 is convenient. The general case follows
from the previous one using the fact that since A has maximal rank there exists
an invertible matrix M′ ∈ Z^{(n+1)×(n+1)} such that M′A has a first row of ones and
distinct coefficients in each other row. □
Remark 3.15. The proof of Proposition 3.14 can be adapted to improve the general
bound (m + n)n for any (n, m) as follows. Assume that for k = 1, . . . , n the
Newton diagram D(Fk ) is convenient. Denote by βk,ℓ the nonzero coordinate of
the intersection point of D(Fk ) with the ℓ-coordinate axis. For any ℓ = 1, . . . , n,
let β̃1,ℓ ≤ · · · ≤ β̃n,ℓ be the integers β1,ℓ , . . . , βn,ℓ ordered increasingly. Then, we
get β̃_{r,ℓ} ≤ m + r for r = 1, . . . , n using the fact that the rows of C belong to the kernel
of a Vandermonde matrix (18) and are linearly independent. By monotonicity of
the mixed covolume, we have µ ≤ Vol◦ (∆1 , . . . , ∆n ) where ∆k is the convex hull
of the points βk,ℓ · eℓ , ℓ = 1, . . . , n. Using monotonicity of the mixed covolume, we
see that, in order to bound µ from above, it is sufficient to consider the case where
β̃_{r,ℓ} = m + r for r = 1, . . . , n. We conjecture that the maximal mixed covolume
Vol°(∆_1, . . . , ∆_n) is (m + 1)(m + 2) · · · (m + n), which would lead to
µ ≤ (m + 1)(m + 2) · · · (m + n).
However, we think that a sharp upper bound for µ is \binom{m+n}{n}.
Since C has maximal rank, by Gauss elimination (that is, left multiplication by
an invertible matrix) we get a matrix C which contains an invertible diagonal n × n
submatrix. If this can be done without losing the condition of having a convenient
system, we get the following bound.
Corollary 3.16. Let C ∈ C^{n×(n+m+1)} and A ∈ Z^{(n+1)×(n+m+1)} such that the
system C · (x + 1)^A = 0 is convenient and non-degenerate at 0 with local intersection
which has maximal rank n + 1. The solutions in the torus of (2) do not change if
we multiply all the polynomials by the monomial x−a0 , which clearly also preserves
the multiplicities. So, without loss of generality we can assume that a0 = 0. Let
B ∈ Z^{(n+m+1)×m} be any Gale dual matrix of A. That is, B is a matrix with
full rank whose column vectors lie in the integer kernel of A (and are thus a
basis of it over Q). A Gale dual matrix of the coefficient matrix C is a complex
matrix D ∈ C^{(m+n+1)×(m+1)} whose column vectors form a basis of the (right) kernel
of C. Clearly, D can be chosen with coefficients in any subfield of C containing the
coefficients of C.
We assume that 1 = (1, . . . , 1) is a solution to (2). Then,
1^A = (1, . . . , 1)
belongs to the kernel of C. Therefore, there exist vectors δ_1, . . . , δ_{n+m} ∈ R^m such
that a Gale dual matrix of C has the form
(22)    D = \begin{pmatrix} 1 & δ_0 \\ 1 & δ_1 \\ \vdots & \vdots \\ 1 & δ_{n+m} \end{pmatrix},    with δ_0 = 0.
Not every Gale dual matrix D has this particular first row and column.
Definition 4.1. We will say that D is reduced when it has the form (22) with
δ0 = 0.
Two Gale dual matrices of C are obtained from each other by right multiplication
by an invertible matrix. Two reduced Gale dual matrices of C are obtained from
each other by right multiplication by an invertible matrix with first row and first
column vectors both equal to (1, 0, . . . , 0).
Let y = (y1 , . . . , ym ). We associate polynomials of degree 1 in y to the rows of
the reduced matrix D = (dij ):
p_i(y) = d_{i0} + \sum_{j=1}^{m} d_{ij} y_j = 1 + ⟨δ_i, y⟩,    i = 0, . . . , n + m.
Note that p_0(y) = 1. Choosing another reduced Gale dual matrix of C amounts to
a linear change of coordinates y′ = ξ(y), so that ⟨δ_i, y⟩ = ⟨δ′_i, y′⟩, where δ′_i is the
image of δ_i by an invertible linear transformation for i = 0, . . . , m + n.
We introduce the Gale dual rational functions
(23)    φ_k(y) = \prod_{i=0}^{n+m} p_i(y)^{b_{ik}},    k = 1, . . . , m.
Denote by H the hyperplane arrangement ∪_{i=1}^{n+m} {p_i(y) = 0}. Consider the solutions
of the system
(24)    φ_k(y) = 1,    k = 1, . . . , m,
in the complement C^m \ H. Clearing the denominators, this amounts to looking at
the solutions in C^m \ H of the polynomial system
(25)    g_k(y) = \prod_{b_{ik}>0} p_i(y)^{b_{ik}} − \prod_{b_{ik}<0} p_i(y)^{−b_{ik}} = 0,    k = 1, . . . , m.
Note that the origin is a solution of (25) and (24) since pi (0) = 1 for all i.
The following result follows essentially from [4, Theorem 2.1]. The only difference
is that here we do not require that the greatest common divisor of the maximal
minors of A, nor that of the maximal minors of B, equals ±1, as in the global case
when we translate all the solutions in the torus.
Theorem 4.2. Assume that A = {a0 , a1 , . . . , an+m } ⊂ Zn and let D be a reduced
Gale dual matrix of the coefficient matrix C. Then, system (2) has an isolated
solution at 1 of multiplicity µ if and only if the system g1 (y) = · · · = gm (y) = 0 has
an isolated solution at the origin of the same multiplicity µ. As a consequence, the
multiplicity at the origin of g1 (y) = · · · = gm (y) = 0 depends neither on the choice
of the reduced matrix D nor on the choice of B.
Proof. Multiplying each equation of (2) by x−a0 does not change the multiplicity
at 1, so we can assume that a0 = (0, . . . , 0). As we mentioned before, if the
subgroup ZA = Za1 + · · · + Zan+m of Zn generated by A is equal to Zn and
the columns of B are a Z-basis of the integer kernel of A, the result follows from
[4, Theorem 2.1]. We expand this a bit further. For any x ∈ (C∗ )n such that
f_1(x) = · · · = f_n(x) = 0, there exists a unique y ∈ C^m \ ∪_{i=1}^{n+m} {p_i = 0} such that
x^{a_i} = p_i(y) for i = 0, . . . , n + m, because the columns of D are a basis of the
kernel of C and x^{a_0} = 1. This defines a map ψ : x ↦ y = ψ(x) from the set of
solutions in the torus (C^∗)^n of f_1 = · · · = f_n = 0 to the solutions in
C^m \ ∪_{i=1}^{n+m} {p_i = 0} of (25). Then, by [4, Theorem 2.1], the map ψ is a bijection
which preserves the multiplicities of the solutions, and the result follows then from
the fact that ψ(1) = (0, . . . , 0).
Assume now that ZA ≠ Z^n. Since a_0, . . . , a_{n+m} are not contained in a hyperplane, ZA has full rank n. By the invariant factor theorem for abelian groups,
there exists a Z-basis (e_1, . . . , e_n) of Z^n and nonzero integers t_1, . . . , t_n such
that (t_1 e_1, . . . , t_n e_n) is a Z-basis of ZA. After a change of coordinates if necessary,
we might assume without loss of generality that (e_1, . . . , e_n) is the standard basis.
Then, for j = 0, . . . , n + m and i = 1, . . . , n, the quantity α_{ij} = a_{ij}/t_i is an integer, and
setting u_i = x_i^{t_i} we have x^{a_j} = u^{α_j} for j = 0, . . . , n + m, where α_j = (α_{1j}, . . . , α_{nj})
and u = (u1 , . . . , un ). Therefore, replacing xaj by uαj , the system f1 = · · · = fn = 0
becomes a system with unknowns u, with the same coefficient matrix and with
support {α0 = 0, α1 , . . . , αn+m } ⊂ Zn which satisfies Zα1 + · · · + Zαn+m = Zn . We
are in the previous situation, and thus this system has a root at 1 of multiplicity
µ if and only if the system g1 (y) = · · · = gm (y) = 0 has a root at (0, . . . , 0) of
multiplicity µ. To conclude, it remains to note that 1 is a root of multiplicity 1 of
the system x_i^{t_i} = 1, i = 1, . . . , n. The argument in case the Z-span of the columns
of B is not ker_Z(A) is similar. □
Remark 4.3. The fact that the multiplicity at the origin of g1 (y) = · · · = gm (y) = 0
does not depend on the choice of D can be seen directly since a different choice
of a reduced Gale dual matrix D of C corresponds to a different choice of linear
coordinates y = (y1 , . . . , ym ) and the multiplicity at the origin is preserved by the
composition on the right by a linear isomorphism. Note, however, that the Newton
diagrams of g1 , . . . , gm are dependent on the choice of linear coordinates and thus
on the choice of D. They also depend on the choice of B.
The following basic results will be useful. Consider the functions φk and gk
defined in (23) and (25).
(26)    φ_{k,j}(y) = \sum_{i=1}^{n+m} b_{ik} · \frac{d_{ij}}{p_i(y)}.
Proof. We first prove that (∂^α g_k)(0) = 0 for any nonzero α ∈ Z^m_{≥0} such that α ≤ β if
and only if L^*_k(α) = 0 for any nonzero α ∈ Z^m_{≥0} such that α ≤ β. Let j ∈ {1, . . . , m}
such that e_j ≤ β. Applying Lemma 4.5 and Lemma 4.7 we get (∂^α g_k)(0) = 0 for
any α ∈ Z^m_{≥0} such that e_j ≤ α ≤ β if and only if L^*_k(α) = 0 for any α ∈ Z^m_{≥0} such
that e_j ≤ α ≤ β. Collecting this for all j such that e_j ≤ β leads to the result. From
the previous computation, Lemma 3.4 and using that g_k = P − Q = Q · (φ_k − 1),
we deduce that if L^*_k(α) = 0 for any nonzero α ∈ Z^m_{≥0} such that α < β, then
(∂^β g_k)(0) = (−1)^{|β|−1} · (|β| − 1)! · L^*_k(β). □
5. H-duality
In this section we introduce the notion of H-duality of a sparse polynomial
system and we relate it to the notion of Gale duality that we recalled in the previous
section. We prove in Theorem 5.1 the equality of the Newton diagrams at 0 ∈ Cm .
In Subsection 5.1 we highlight the main relations.
By Theorem 4.2, the systems C · (x + 1)A = 0 and the Gale dual system (25)
have the same multiplicity at the origin when 1 is an isolated solution of C · xA = 0.
Given a complex number d, we consider the analytic function x^d : C \ R_{≤0} → C
defined by
x^d = e^{d log(x)},
where log is the branch of the logarithm defined in C \ R_{≤0} that takes the value 0
at x = 1. The function (x + 1)d is analytic around 0, where it takes the value 1.
When d ∈ Z, this is just the Laurent polynomial (x + 1)^d. Given d ∈ C^n, we can
similarly define the analytic function x^d : (C \ R_{≤0})^n → C by x^d = \prod_{i=1}^{n} x_i^{d_i} and
substitute x + 1 for x coordinatewise.
Given a Gale dual matrix B of A and a reduced Gale dual matrix D of C as
in (22), we define the system of analytic functions:
(29)    h_k(y) = \sum_{i=0}^{m+n} b_{ik} y^{δ_i} = 0,    k = 1, . . . , m.
We will abbreviate system (29) as B^t · y^{D^t} = 0. Note that this is a system in
codimension many variables. We denote
(30)    H_k(y) = h_k(y + 1) = 0,    k = 1, . . . , m.
We will say that the system B^t · (y + 1)^{D^t} = 0 given by (30) is H-dual to the system
C · (x + 1)^A = 0. Note that C · (x + 1)^A = 0 is H-dual to B^t · (y + 1)^{D^t} = 0.
The next result follows from the observation that the quantity L∗k defined in (27)
coincides with the quantity (12) associated to hk (instead of fk ).
Theorem 5.1. Assume that 1 ∈ Cn is an isolated solution of the system C ·xA = 0.
Let B be a Z-Gale dual matrix of A and D be a reduced Gale dual matrix of C. Then
the polynomials g1 (y), . . . , gm (y) defining the Gale dual system and the polynomials
H1 , . . . , Hm defining the H-dual system
(31)    B^t · (y + 1)^{D^t} = 0
have respectively the same Newton diagrams. Moreover, for any β ∈ Z^m_{≥0} which lies
on D(H_k), the coefficient of y^β in H_k equals L^*_k(β)/(β_1! · · · β_m!), while the coefficient of y^β in
g_k equals (−1)^{|β|−1} (|β| − 1)! L^*_k(β)/(β_1! · · · β_m!).
If the Gale dual system and the system (31) are convenient and are non-degenerate
at 0, then all three systems C · (x + 1)^A = 0, g_1(y) = · · · = g_m(y) = 0 and
B^t · (y + 1)^{D^t} = 0 have the same multiplicity at the origin.
Proof. Note that 0 ∈ C^m is a solution of the system B^t · (y + 1)^{D^t} = 0 since
the sum of the rows of B is zero. The same is true for the Gale dual system
g_1(y) = · · · = g_m(y) = 0 due to Theorem 4.2 and the fact that 1 is a solution of
C · x^A = 0. The k-th polynomial of the system B^t · x^{D^t} = 0 is the polynomial
h_k(x) defined in (29). Observe that L^*_k(α) defined in (27) is equal to the toric
derivative θ^α h_k evaluated at the point 1 ∈ C^m, where as before θ^α = \prod_{i=1}^{m} θ_i^{α_i}
and θ_i = x_i ∂/∂x_i. Using Lemma 3.3, we get that β ∈ Z^m_{≥0} is a vertex of the Newton
diagram of H_k if and only if L^*_k(β) ≠ 0 and L^*_k(α) = 0 for any nonzero α ∈ Z^m_{≥0} such
that α < β. Moreover, for any β ∈ Z^m_{≥0} which lies on D(H_k), the coefficient of y^β
in H_k equals L^*_k(β)/(β_1! · · · β_m!). Proposition 4.8 gives the analogous results for g_k.
Finally, if the Gale dual system and the system (31) are convenient and non-
degenerate at 0, then they have the same multiplicity at the origin which equals the
mixed covolume of their common Newton diagrams by Theorem 2.3. Moreover, the
systems C · (x + 1)A = 0 and g1 (y) = · · · = gm (y) = 0 have the same multiplicity
at the origin by Theorem 4.2. □
Remark 5.2. The final assertion of Theorem 5.1 is a local statement. Assuming C
has rational coefficients and D has integer coefficients, one might ask if the systems
C · (x + 1)^A = 0 and B^t · (y + 1)^{D^t} = 0 in Theorem 5.1 have the same number of
nonzero solutions. This is not the case. A simple example can be found already in
the case n = m = 1. Take A = {0, 1, 3} and B = (2, −3, 1)^t. Take C = (1, −2, 1) so
that the matrix D with first column (1, 1, 1)^t and second column (0, 1, 2)^t is Gale
dual to C. Then the system C · x^A = 0 equals the degree 3 polynomial 1 − 2x + x^3 = 0,
which has three simple (real) roots 1 and (−1 ± √5)/2. On the other hand the system
B^t · y^{D^t} = 0 equals the degree 2 polynomial 2 − 3y + y^2 = (y − 1)(y − 2) = 0, with
two (real) roots.
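A quick check of these root counts (assuming sympy):

    # Remark 5.2: the two dual systems have different numbers of nonzero solutions.
    import sympy as sp

    x, y = sp.symbols('x y')
    print(sp.roots(1 - 2*x + x**3, x))    # three simple roots: 1, (-1 +- sqrt(5))/2
    print(sp.roots(2 - 3*y + y**2, y))    # two simple roots: 1 and 2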
5.1. Viewing all the systems together. Start with a system C · (x + 1)A = 0 of
dimension n and codimension m and having 0 as an isolated solution of multiplicity
µ ≥ 1. Assume that a Gale dual matrix B for A and a reduced Gale dual matrix D
for C are given. We denote the associated Gale system g1 (y) = · · · = gm (y) = 0 by
the compact form (D · y)B = 1. This is a system in m variables and it has 0 as a
solution with the same multiplicity µ. The H-dual system B^t · (y + 1)^{D^t} = 0 has the
same Newton diagrams as the Gale system. Its number of variables (dimension)
is m while its codimension is n. It has 0 as a solution of multiplicity µ′ ≥ 1. If the
Gale system (D · y)^B = 1 is convenient and non-degenerate, then µ′ ≥ µ. On the
other hand, if B^t · (y + 1)^{D^t} = 0 is convenient and non-degenerate, then µ ≥ µ′.
One can also consider a polynomial system Gale dual to B^t · (y + 1)^{D^t} = 0. Assume
now that a_0 = (0, . . . , 0). Then the matrix A^t is a reduced Gale dual matrix of
B^t. Moreover C^t is Gale dual to D^t. Thus a polynomial system Gale dual to
B^t · (y + 1)^{D^t} = 0 is (A^t · x)^{C^t} = 1. So again, if (A^t · x)^{C^t} = 1 is convenient and non-degenerate then µ ≥ µ′ while if C · (x + 1)^A = 0 is convenient and non-degenerate
then µ′ ≥ µ. We summarize this in the following diagram
(32)
    C · (x + 1)^A = 0    —Gale map→    (D · y)^B = 1              mult. µ
            ↕                                    ↕
    (A^t · x)^{C^t} = 1    ←Gale map—    B^t · (y + 1)^{D^t} = 0    mult. µ′
The horizontal arrows represent Gale duality map for polynomial systems. The
vertical double arrows indicate polynomial systems which have the same Newton
diagrams, and thus the same number of variables. Systems on the first column have
n variables while those on the second column have m variables. The multiplicity at
0 of the systems on the first row equals µ, while the multiplicity at 0 of the systems
on the second row equals µ′ . If at least one system on the first row is convenient
and non-degenerate at 0, then µ ≤ µ′ . If at least one system on the second row is
convenient and non-degenerate at 0, then µ ≥ µ′ .
Recall that two matrices are said to be left (resp., right) equivalent if one matrix is
obtained from the other by left (resp., right) multiplication by an invertible matrix.
If C̃ and à are left equivalent to C and A, respectively then C · (x + 1)A = 0 and
C̃ · (x + 1)Ã = 0 have the same multiplicity at 0, but not necessarily the same
Newton diagrams. On the other side, if B̃ and D̃ are right equivalent to B and
D, respectively, with both D and D̃ reduced, then (D · y)B = 1 and (D̃ · y)B̃ = 1
have the same multiplicity at 0, but not necessarily the same Newton diagrams.
Similar statements hold true for the systems on the second row of (32). Note that
taking transpose of matrices transforms left equivalence to right equivalence (and
vice versa). Note also that left and right equivalences behave well with respect
to Gale duality for matrices in the sense that if M̃ is left equivalent to M and
Ñ is right equivalent to N then Ñ is a Gale dual matrix for M̃ if and only if
N is a Gale dual matrix for M. Therefore, (32) holds true if we replace A, C
by left equivalent matrices and B, D by right equivalent matrices (still imposing
that A^t and D are reduced). Under the assumption that 1 is an isolated solution
and the matrices have full rank, we prove in Proposition 3.9, Proposition 3.10,
Proposition 6.3, and Proposition 6.4 that there always exist equivalent matrices so
that all polynomial systems in (32) are convenient. This leads us to pose Question 1.4
in the Introduction. When n = 1 or m = 1, Question 1.4 has a positive answer
because the Newton diagram of any non-zero univariate polynomial vanishing at
zero is convenient and non-degenerate at zero. A positive answer to Question 1.4 in
general would imply that we always have µ = µ′ in (32) and would imply a positive
answer to Question 1.3.
Example 5.3. Consider the system (x − 1)^3 = 0. This is a system in dimension
n = 1 and codimension m = 2. Setting a_i = i, i = 0, . . . , 3, we get the matrices
A = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 \end{pmatrix}    and    C = \begin{pmatrix} −1 & 3 & −3 & 1 \end{pmatrix}.
The matrix
B = \begin{pmatrix} 1 & 2 \\ −2 & −3 \\ 1 & 0 \\ 0 & 1 \end{pmatrix}
is Gale dual to A. Any other Gale dual matrix of A is obtained by right mul-
tiplication of B with an integer matrix with nonzero determinant. Similarly, the
matrix
D = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 0 & 1 \\ 1 & −3 & 3 \end{pmatrix}
is a reduced matrix Gale dual to C. Any other reduced Gale dual matrix of C is
obtained by right multiplication of D with an invertible matrix with real coefficients
and first row and column vectors both equal to (1, 0, 0). Under our choice of B and
D, we have p0 (y) = 1, p1 (y) = 1 + y1 , p2 (y) = 1 + y2 and p3 (y) = 1 − 3y1 + 3y2 .
The associated Gale system is g_1(y) = g_2(y) = 0 with g_1(y) = p_2(y) − p_1(y)^2 =
y_2 − 2y_1 − y_1^2 and g_2(y) = p_3(y) − p_1(y)^3 = 3y_2 − 6y_1 − 3y_1^2 − y_1^3. Thus g_1 and g_2 have
the same Newton diagram, given by the segment with vertices (1, 0) and (0, 1),
and the corresponding truncated polynomials are y_2 − 2y_1 and 3y_2 − 6y_1 (these
truncated polynomials are the polynomials g_1^ϵ and g_2^ϵ for ϵ = (1, 1)). So the Gale
system is convenient but it is degenerate at the origin since y_2 − 2y_1 = 3y_2 − 6y_1 = 0
has a solution in (C∗ )2 . In fact, since the Gale system is convenient it should be
degenerate at the origin for otherwise the multiplicity of 0 would be equal to the
corresponding mixed covolume (Theorem 2.3), which equals 1 in that case, but
we know that the multiplicity at the origin of the Gale system is equal to the
multiplicity at 1 of the initial system, which equals 3 by Theorem 4.2. Let us
choose another reduced Gale dual matrix for C. Multiplying on the right the
starting matrix D with an invertible matrix
\begin{pmatrix} 1 & 0 & 0 \\ 0 & a & b \\ 0 & c & d \end{pmatrix}
gives the reduced Gale dual matrix of C
\begin{pmatrix} 1 & 0 & 0 \\ 1 & a & b \\ 1 & c & d \\ 1 & 3(c − a) & 3(d − b) \end{pmatrix}
again denoted by D. Then, p0 (y) = 1, p1 (y) = 1 + ay1 + by2 , p2 (y) = 1 + cy1 + dy2
and p3 (y) = 1 + 3(c − a)y1 + 3(d − b)y2 , which gives
g_1(y) = (c − 2a)y_1 + (d − 2b)y_2 − (ay_1 + by_2)^2
and
g_2(y) = 3(c − 2a)y_1 + 3(d − 2b)y_2 − 3(ay_1 + by_2)^2 − (ay_1 + by_2)^3.
If c ̸= 2a and d ̸= 2b, then the Newton diagrams are as before both equal to the
segment with vertices (1, 0) and (0, 1) and we conclude again that the system is
convenient and degenerate at the origin. Assume from now on that c = 2a. Then
d ̸= 2b and a ̸= 0 since ad−bc ̸= 0. Thus the Newton diagrams of g1 and g2 are both
equal to the segment with vertices (2, 0) and (0, 1). The corresponding truncated
system is (d − 2b)y_2 − a^2 y_1^2 = 3(d − 2b)y_2 − 3a^2 y_1^2 = 0 and so again the system g_1 =
g2 = 0 is convenient and degenerate at the origin. In fact this was expected, since
otherwise the multiplicity at the origin would have been equal to the corresponding
mixed covolume, which equals 2 and not 3. The system g_1 = g_2 − 3g_1 = 0 has the
same solutions and the same multiplicity at the origin. Multiplying B on the right
with the invertible matrix
\begin{pmatrix} 1 & −3 \\ 0 & 1 \end{pmatrix}
(so that its second column becomes b_2 − 3b_1), we get a Gale system where the Newton diagram of g_1 and the corresponding
truncated polynomial is unchanged. We compute that the Newton diagram of g2
is the union of the segments E_1 = [(0, 2), (1, 1)] and E_2 = [(1, 1), (3, 0)], assuming
that 6b^2 + 3d^2 − 9bd ≠ 0. In fact the truncation of g_2 to E_1 is the polynomial
−(6b^2 + 3d^2 − 9bd)y_2^2 + 3a(2b − d)y_1 y_2 while the truncation of g_2 to E_2 is the
polynomial 3a(2b − d)y_1 y_2 + 2a^3 y_1^3. We conclude that this Gale system is convenient
and non-degenerate at the origin. Thus the multiplicity at the origin is equal to
the corresponding mixed covolume (Theorem 2.3). Using Formula (9), we compute
that this mixed covolume equals 3, as expected.
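As a sanity check of Example 5.3 (a sketch assuming sympy), the first Gale system above indeed has multiplicity 3 at the origin: on the graph y_2 = 2y_1 + y_1^2 cut out by g_1, the polynomial g_2 restricts to −y_1^3.

    # Multiplicity of the Gale system of Example 5.3 at the origin (sympy assumed).
    import sympy as sp

    y1, y2 = sp.symbols('y1 y2')
    g1 = y2 - 2*y1 - y1**2
    g2 = 3*y2 - 6*y1 - 3*y1**2 - y1**3
    print(sp.expand(g2.subs(y2, 2*y1 + y1**2)))    # -y1**3: vanishing order 3 at 0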
We end this section with an easy observation to detect in terms of the matrices
when the intersection multiplicity is higher than 1. Assume that a0 = (0, . . . , 0) and
let A′ be the matrix with column vectors a1 , . . . , am+n in this order (A′ is obtained
by removing the first row and 0-th column of A) and let C ′ be the matrix C with
0-th column removed. Given any reduced Gale matrix D, let b = (b_0, . . . , b_{m+n}) be
any column vector of B. The corresponding Gale polynomial is
g(y) = \prod_{b_i>0} (1 + ⟨δ_i, y⟩)^{b_i} − \prod_{b_i<0} (1 + ⟨δ_i, y⟩)^{−b_i} and we get
g(y) = \sum_{i=1}^{m+n} b_i ⟨δ_i, y⟩ + terms of degree ≥ 2.
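For instance, with the matrices B and D of Example 5.3, the column b = (1, −2, 1, 0)^t of B gives \sum_{i=1}^{3} b_i ⟨δ_i, y⟩ = −2y_1 + y_2, which is indeed the degree one part of the Gale polynomial g_1(y) = y_2 − 2y_1 − y_1^2 computed there.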
We also set
(35)    ρ^*_{k,ℓ} = |{u ∈ {d_{0,ℓ}, . . . , d_{m+n,ℓ}} : b̄_{k,ℓ,u} ≠ 0}| − 1,
where s^*_1, . . . , s^*_m and t^*_1, . . . , t^*_m are defined in (36) and in (37).
Proof. This is similar to the proof of Theorem 3.11 using Lemma 6.2. □
Corollary 6.6. Assume that the Gale system g_1 = · · · = g_m = 0 is convenient and
non-degenerate at 0. If B contains a diagonal invertible matrix of size m × m then
µ ≤ (n + 1)^m.
Proof. This follows from Theorem 6.5 noting that s∗k ≤ n + 1 for k = 1, . . . , m if B
contains a diagonal invertible matrix of size m × m. □
Proposition 6.7. Assume that the Gale system g_1 = · · · = g_m = 0 is convenient
and non-degenerate at 0. Then
µ ≤ (n + m)^m.
When m = 2, we have the better bound µ ≤ (n + 1)(n + 2).
Proof. This follows from Theorem 6.5 noting that t∗ℓ ≤ m + n for ℓ = 1, . . . , m. A
proof of the last statement can be obtained using Theorem 5.1 and the proof of
Proposition 3.14. □
Proposition 7.4. Let k = 1, . . . , n. Then D(F_k) is the union of the bounded faces
of the convex hull of the union of the sets α + R^n_{≥0} over all α ∈ Z^n_{≥0} such that
|α|′ = \sum_{i=1}^{n} i α_i ≥ m + k.
|α|′! · (1 − e^z)^{m+k}. As a consequence D(F_k) intersects the i-th coordinate axis at
the point with i-th coordinate ⌈(m + k)/i⌉ for i = 1, . . . , n (where ⌈·⌉ stands for the upper
integer part).
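A small numerical check of these axis intersections (assuming sympy; the choice n = 2, m = 3 below is just an illustrative case):

    # Check the axis intersections of D(F_k) predicted by Proposition 7.4.
    import sympy as sp
    from math import ceil

    z1, z2 = sp.symbols('z1 z2')

    def f(d):                                   # f_{d,2} from (6)
        return sum((-1)**j * sp.binomial(d, j) * z1**j * z2**(j**2)
                   for j in range(d + 1))

    m = 3
    for k in (1, 2):
        F = sp.expand(f(m + k).subs({z1: z1 + 1, z2: z2 + 1}))
        ord1 = min(sp.degree(t, z1) for t in F.subs(z2, 0).as_ordered_terms())
        ord2 = min(sp.degree(t, z2) for t in F.subs(z1, 0).as_ordered_terms())
        print((ord1, ord2), (ceil((m + k) / 1), ceil((m + k) / 2)))   # pairs agree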
Proposition 7.5. Consider the system (5). For n = 2 and any positive integer
m, the multiplicity at 1 is equal to \binom{m+2}{2}.
Proof. Assume n = 2. We treat the case m odd as the case m even is similar. Set
a = (m + 1)/2. We apply Proposition 7.4. We compute that D(F_1) is the segment E_1 =
[(0, (m + 1)/2), (m + 1, 0)] and the truncation of F_1 to E_1 is equal, up to a multiplicative
constant, to the polynomial
F̃_1(x_1, x_2) = \sum_{i=0}^{a} c_i x_1^{2i} x_2^{a−i},    where c_i = 1/((2i)! (a − i)!).
Then, we compute that D(F_2) is the union of the segments E_{2,1} = [(0, (m + 3)/2), (1, (m + 1)/2)]
and E_{2,2} = [(1, (m + 1)/2), (m + 2, 0)]. Moreover, the truncation of F_2 to E_{2,2} is equal,
up to a multiplicative constant, to
F̃_2(x_1, x_2) = x_1 \sum_{i=0}^{a} c′_i x_1^{2i} x_2^{a−i},    where c′_i = 1/((2i + 1)! (a − i)!).
The truncation of F_2 to the other segment E_{2,1} is a binomial polynomial (we
will not need to know its coefficients). Using Formula (9), one can check that
Vol°(∆_1, ∆_2) = \binom{m+2}{2}, where ∆_k is the Newton polytope of F_k, see Figure 2.
Thus, by Theorem 2.3, in order to prove that the multiplicity at the origin of the
system F_1 = F_2 = 0 is \binom{m+2}{2}, it suffices to prove that this system is non-degenerate
at the origin. For this it suffices to prove that the system F̃_1 = F̃_2 = 0 has no solution in (C^∗)^2. Using the monomial change of coordinates x = x_1^2 x_2^{−1}, we are reduced
to show that the univariate polynomials ℓ_1(x) = \sum_{i=0}^{a} c_i x^i and ℓ_2(x) = \sum_{i=0}^{a} c′_i x^i
have no common complex root. It turns out that, up to scalar multiplication and
change of variable, these polynomials are known as Laguerre polynomials. In fact,
they are also Hermite polynomials, up to another change of variables and scalar
multiplication. Explicitly,
ℓ_1(x) = s_1 · L_a^{1/2}(−x/4)    and    ℓ_2(x) = s_2 · L_a^{−1/2}(−x/4),
where s_1 is the inverse of \prod_{i=0}^{a−1} (a + 1/2 − i) and s_2 is the inverse of \prod_{i=0}^{a−1} (a − 1/2 − i).
It is known by [7, Theorem 2.3] that the Laguerre polynomials L_a^{1/2}(x) and L_a^{−1/2}(x)
have only real roots (otherwise said, they are hyperbolic polynomials) and that
their roots interlace. In particular, they have no common root. We conclude that the
multiplicity at the origin of the system F_1 = F_2 = 0 is \binom{m+2}{2}.
□
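For a concrete instance of the last step (assuming sympy), one can certify for m = 3, that is a = 2, that ℓ_1 and ℓ_2 have no common complex root by checking that their resultant is nonzero:

    # No common root of l1 and l2 for a = 2 (sympy assumed).
    import sympy as sp

    x = sp.symbols('x')
    a = 2
    l1 = sum(x**i / (sp.factorial(2*i) * sp.factorial(a - i)) for i in range(a + 1))
    l2 = sum(x**i / (sp.factorial(2*i + 1) * sp.factorial(a - i)) for i in range(a + 1))
    print(sp.resultant(l1, l2, x) != 0)        # True: the truncations share no root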
[Figure 2: the Newton polytopes ∆_1 and ∆_2 of F_1 and F_2 (coordinates involving a, a + 1, 2a, 2a + 1, 4a + 1).]
It is interesting to compare the value of the maximum multiplicity at one point
\binom{m+2}{2} for n = 2 and any codimension m with the normalized volume of the cyclic
polytope P_{2,m} which is the convex hull of the corresponding support set, that
bounds the number of isolated solutions. Using the Online Encyclopedia of Integer
Sequences (https://oeis.org/), we have that vol_Z(P_{2,m}) = 2 \binom{m+3}{3}, a polynomial
of degree 3 in m. On the other side \binom{m+2}{2} is only a polynomial of degree 2 in m,
so the difference tends to infinity with m.
Remark 7.6. We showed in Section 5 that H-duality associates to a polynomial
system C · (x + 1)^A = 0 vanishing at 0 ∈ C^n a polynomial system B^t · (y + 1)^{D^t} = 0
vanishing at 0 ∈ C^m, where B and D are matrices (with D reduced) which are
Gale dual to A and C respectively. Proposition 7.3 reveals that H-duality just
permutes the dimension and the codimension in the family of systems (39) with
high multiplicity (up to left equivalence for matrices).
We end with a suggestion for future work.
Remark 7.7. We expect that the local multiplicity at 1 of the system (5) is equal to
\binom{n+m}{m} for any positive integers n, m. When m = 2 a proof would follow
closely the proof of Proposition 7.5 using Theorem 5.1, Proposition 7.3, see also
Remark 7.6 (the Newton diagrams are the same, only the coefficients are slightly
different).
λ_0, λ_1, . . . , λ_{n+1} are all non-zero. It follows then from Proposition 4.8 that
(43)    \sum_{j=1}^{n+1} λ_j d_{ij} = 0,    i = 1, . . . , n.
Thus the matrix A and the matrix (d_{ij})_{0≤i≤n, 0≤j≤n+1} have the same Gale dual
matrix B. Equivalently both matrices are equal up to left multiplication by an invertible matrix, in other words, A and the cyclic configuration {γ(d_0), . . . , γ(d_{n+1})}
are equal up to affine transformation.
Conversely, assume that A is equal up to affine transformation to a cyclic configuration {γ(d_0), . . . , γ(d_{n+1})}, where all d_i are distinct. Then A is equal up to affine
transformation to the cyclic configuration {γ(0), γ(d_1 − d_0), . . . , γ(d_{n+1} − d_0)} since
both cyclic configurations share the same affine relations. Then taking for C the
transpose of any Gale dual matrix of the matrix whose first row is the all ones vector
and whose second row is the vector (0, d_1 − d_0, . . . , d_{n+1} − d_0), we get a system having
1 as a solution of multiplicity n + 1, by reversing the previous arguments. □
Example 8.4. Assume that A = {(0, 0), (1, 0), (1, 1), (0, 1)}. Then it is easy to show
that A is not the image by an affine transformation of a cyclic point configuration
{γ(d0 ), . . . , γ(d3 )}, where all di are distinct. Thus, the maximal multiplicity of a
system C · xA at a point q ∈ (C∗ )2 is at most 2. In fact this maximal multiplicity
is equal to 2, see Example 3.19.
System (6) has real coefficients, and it led us to pose Question 1.2. To our
knowledge, a positive answer to this question would give a record number of positive
solutions for any values of m, n such that m ≥ 3 and n ≥ 2. The answer is trivially
yes for n = 1.
When m = 1 and n is any positive integer, one tool used to get systems reaching
the upper bound n + 1 for the multiplicity is the theory of ”real dessins d’enfant”.
We are going to show below how to use real dessins d’enfant to affirmatively answer
Question 1.2 in the case m = 1 (and any n). We recall some needed facts about real
dessins d’enfant, a more complete description can be found in [5] and the references
therein.
Consider any polynomial map Φ = (P, Q) : CP^1 → CP^1, where P, Q are homogeneous polynomials of the same degree d. It is convenient to write Φ(x) = (φ(x) : 1)
with φ(x) = P(x)/Q(x) when Q(x) ≠ 0. Assume that P and Q have real coefficients.
Then Φ is a real map, that is, we have Φ(x̄) = \overline{Φ(x)}, where the bar symbol denotes the
(coordinatewise) complex conjugation of CP^1. The real dessin d'enfant associated
to Φ is Γ = Φ^{−1}(RP^1). This is a graph on CP^1 with vertices the critical points x of
Φ such that Φ(x) ∈ RP^1 (real critical values) and edges which form the pre-image
of segments between two consecutive critical values in RP^1. The inverse image of
(0 : 1) (resp., (1 : 0), (1 : 1)) is the set of roots of P (resp., Q, g = P − Q). We will
denote all roots of P (resp., Q, g = P − Q) by the same letter p (resp., q, r). The
graph Γ equipped with letters p, q, r has the following properties. It is invariant
under complex conjugation. The restriction of Φ to any connected component of
CP 1 \ Γ is a covering of one connected component of CP 1 \ RP 1 . The degree of
this covering is given by the number of letters p (resp., q, r) in the boundary and
these letters p, q, r are arranged consistently with the arrangement of their images
(0 : 1) (resp., (1 : 0), (1 : 1)) on RP^1 (for instance, an open arc joining letters p and
q cannot contain the letter r and no other letters p, q). It follows that the degree
of Φ is half the sum of all these local degrees. Moreover, the valency of a vertex of
Γ is an even integer; in fact this is twice the multiplicity of this vertex as
a critical point of Φ.
Conversely, any graph on CP^1 marked with letters p, q, r which satisfies the
previous properties is the real dessin d'enfant associated to a real polynomial map
Φ : CP^1 → CP^1 by the Riemann uniformization theorem (see [5] and the references
therein for more details) whose degree can be computed as described before.
Let A ⊂ Z^n be a set of dimension n and codimension m. Consider a polynomial system C · X^A = 0 with support A and real coefficient matrix C. The corresponding
system (23) has the form \prod_{i=0}^{n+m} p_i(y)^{λ_i} = 1 where all p_i are real degree one polynomials given by the rows of a matrix D which is Gale dual to C and the exponents
λ_i are integer coefficients in any non trivial affine relation on A. So the solutions
of the Gale system are the pre-image of 1 under this rational function, that is, the
marked points r on the corresponding real dessin d'enfant. Note that all marked
points p and q belong to RP^1 since the numerator and denominator of the rational
functions are products of real polynomials of degree one. Conversely, any map Φ
with this property can be seen as a rational map coming, via Gale duality, from
a real polynomial system of dimension n and codimension 1. Given a real linear
transformation τ of the source space CP^1, one can consider the image of Γ by τ
and mark by the same letter p, q or r the image of any marked point of Γ. Choosing
another Gale dual matrix D for C is the same as applying such a transformation,
so we consider a real dessin d'enfant up to this group action to get rid of the dependence
on the choice of D.
Proposition 8.5. Let A ⊂ Zn be a set of dimension n and codimension 1, and let
C ∈ Rn×(n+2) be a matrix such that the corresponding polynomial system C ·X A = 0
has a solution at 1 of multiplicity n + 1. Then there exists a small perturbation
C̃ ∈ Rn×(n+2) of C such that the polynomial system C̃ · X A = 0 has n + 1 positive
solutions.
Proof. Choose any non trivial affine relation with integer coefficients λi on A.
Choose any reduced Gale matrix D of C and consider the rational function φ = φ1
given by (23) (here m = 1). Recall that pi (y) = 1 + di y for i = 0, . . . , n + 1,
where the (1, di ) are the rows of D. Then, by Theorem 4.2 we have that y = 0 is
a critical point of multiplicity n + 1 of φ with critical value 1, which corresponds
to the solution (1, . . . , 1) of the system C · X A = 0. Let Γ be the associated real
dessin d’enfant. The critical point y = 0 is a marked point of Γ, with a letter r,
and it is a vertex of Γ with valency 2(n + 1). By the Gale duality theorem for positive solutions (see [1]), there is a bijection (the restriction of the map ψ defined in
the proof of Theorem 4.2) between the set of positive solutions of the polynomial
system C · X^A = 0 and the set of marked points with letters r contained in the
interval I = {y ∈ R : p_i(y) > 0 for all i}. Since y = 0 is the image of x = (1, . . . , 1)
by the Gale map, we see that I is the open interval in RP^1 containing the marked
point y = 0 and no marked points with letters p and q. We may slightly deform
Γ in a neighbourhood U of the point y = 0 in order to get a real dessin d'enfant
Γ with n + 1 marked points with letter r in the interval I (and n additional vertices
of valency 2 on RP^1 in I, one between two consecutive letters r). This is
the local deformation of real dessins d’enfant called splitting in [5] (see Figure 3
which corresponds to the case n = 2). Then, the resulting real dessin d'enfant has
n + 1 marked points with letter r in I, and the marked points with letters p and
q still belong to RP^1 and have the same valencies. It follows that the resulting
real dessin d'enfant corresponds to a real polynomial system with the same support
(since the affine relation is preserved) having the maximal number n + 1 of positive
solutions and which is a small perturbation of the starting system.
□
[Figure 3: splitting of the real dessin d'enfant near y = 0, creating n + 1 marked points r (shown for n = 2).]
kernel of the matrix A^{(µ−1)} ∈ Z^{\binom{n+µ−1}{n} × (n+m+1)} in Definition 1.5. Clearly, this
matrix can have full row rank and non-trivial kernel only when \binom{n+µ−1}{n} ≤ n + m.
Lemma 9.1. Fix n and let p be the univariate polynomial p(µ) = \binom{n+µ−1}{n} − (n + m).
Then p has a unique positive root µ_0(n, m).
As p(0) < 0 and its leading coefficient is positive, p has at least one positive root.
It has exactly one because of Descartes’ rule of signs since the sign variation of its
coefficients is equal to 1. Note that for any k ∈ N, p(k) ≤ 0 when k ≤ µ0 (n, m) and
p(k) > 0 only when k > µ0 (n, m).
Definition 9.2. Let n, m be natural numbers. The associated naive bound σ(n, m)
is defined as the floor ⌊µ_0(n, m)⌋, that is, as the largest (positive) integer less than
or equal to µ_0. The bound b(n, m) is defined as 1 + ⌈m/n⌉, that is, as the smallest
(positive) integer greater than or equal to 1 + m/n.
It follows that the multiplicity of fn+m,n at (1, . . . , 1) is equal to the smallest s such
that s · n ≥ n + m. This finishes the proof. □
The assumptions in Theorem 9.4 are not optimal. That is, there are plenty of
non-uniform point configurations A for which one can find a proof using the two
cases of the induction step. This is the case of configurations of dimension 2, as we
show in Theorem 9.6 below. On the other hand, the bound b(n, m) does not hold
in general (see Example 9.7).
Note that for any fixed n, \binom{n+µ−1}{n} is a polynomial in µ of degree n, while b(n, m)
is linear in m. It follows that after a value of m, we have that the naive bound
σ(n, m) is strictly smaller than our bound b(n, m). Moreover, we have shown in
Proposition 9.5 that the bound b(n, m) is attained.
Proof. While a point configuration A of dimension 2 need not fulfill the genericity
assumption of Theorem 9.4, we show that one can always find a hyperplane H as
in one of the two cases in the induction step in the proof of the theorem.
If all edges of the Newton polytope ∆(A) contain at least three points, then we
can choose H as the affine span of one edge of ∆(A); the complement A \ H is
full-dimensional since ∆(A) has at least three edges.
Assume that there is an edge of ∆(A) which contains exactly two points. Let
H be the affine span of this edge. There are two possibilities. Either A \ H is
full-dimensional, or A \ H is contained in a line L. In the latter case, A \ L consists
of the vertices of a simplex. The proof of the last statement is straightforward and
we only illustrate it in Table 9. □
m σ(2, m) b(2, m)
1 2 2
2 2 2
3 3 3
4 3 3
5 3 4
6 3 4
7 3 5
8 4 5
9 4 6
10 4 6
Table 1. Comparing bounds for n = 2.
We end with an example showing that the bound from Theorem 9.4 does not
necessarily hold for non-generic uniform configurations in dimension n = 3.
References
[1] F. Bihan and F. Sottile. New fewnomial upper bounds from Gale dual polynomial systems,
Mosc. Math. J. 7:3 , 387–407, 2007.
[2] F. Bihan, A. Dickenstein and J. Forsgård. Optimal Descartes' rule of signs for systems
supported on circuits, Math. Ann. 381, no. 3-4, 1283–1307, 2021.
[3] F. Bihan, F. Santos and P.-J. Spaenlehauer. A Polyhedral Method for Sparse Systems with
Many Positive Solutions, SIAM J. Appl. Algebra Geom. 2, no. 4, 2018.
[4] F. Bihan and F. Sottile. Gale duality for complete intersections, Ann. Inst. Fourier 58,
no. 3, 877–891, 2008.
[5] F. Bihan. Maximally positive polynomial systems supported on circuits, J. Symbolic Comput. 68, part 2, 61–74, 2015.
[6] A. Dickenstein, R. Piene: Higher order selfdual toric varieties, Annali Mat. Pur. Appl.
195, n.5, 1759–1777, 2017.
[7] K. Driver and K. Jordaan. Interlacing of zeros of shifted sequences of one-parameter orthogonal polynomials, Numer. Math. 107, no. 4, 615–624, 2007.
[8] A. Esterov: Determinantal singularities and Newton polyhedra, Proc. Steklov Inst. Math.
259, no.1, 16–34, 2007.
[9] J. Forsgård: Defective dual varieties for real spectra, J. Algebraic Combin. 49, n. 1, 49–67,
2019.
[10] B. Haas: A Simple Counterexample to Kouchnirenko's Conjecture, Beiträge zur Algebra
und Geometrie 43, n. 1, 1–8, 2002.
[11] M. Herrero, G. Jeronimo, J. Sabia: On the multiplicity of isolated roots of sparse polyno-
mial systems, Discrete Comput. Geom. 62, no. 4, 788–812, 2019.
[12] P. Koiran, M. Skomra: Intersection multiplicity of a sparse curve and a low degree curve,
Journal of Pure and Applied Algebra, 224(7), 2020.
[13] A. Gabrielov: Multiplicities of Pfaffian intersections, and the Łojasiewicz inequality, Selecta Math. (N.S.) 1, no. 1, 113–127, 1995.
[14] I. M. Gelfand, M. M. Kapranov, A. V. Zelevinsky: Discriminants, resultants, and multi-
dimensional determinants. Mathematics Theory & Applications. Birkhäuser Boston, Inc.,
Boston, MA, 1994.
[15] G. Hajós: Solution to problem 41 (in Hungarian). Mat. Lapok, 4:40–41, 1953.
[16] K. Kaveh, A. Khovanskii: Convex bodies and multiplicities of Ideals, Proc. Steklov Inst.
Math. 286, 268–284, 2014.
[17] A. Khovanskii: Fewnomials, Translations of Mathematical Monographs Vol. 88, AMS,
1991.
[18] A. G. Kouchnirenko: Polyèdres de Newton et nombres de Milnor. Invent math 32, 1–31,
1976.
[19] P. Mondal: Intersection multiplicity, Milnor number and Bernstein's theorem. Available at
https://arxiv.org/abs/1607.04860.
[20] P. Mondal: How many zeroes? Counting solutions of systems of polynomials via toric
geometry at infinity, CMS/CAIMS Books in Mathematics, 2. Springer, Cham, 2021.
[21] I. Nikitin: Bivariate systems of polynomial equations with roots of high multiplicity, J.
Algebra Appl. 22, No. 01, 2350014, 2023.
[22] G. Ziegler: Lectures on polytopes, Graduate Texts in Mathematics, 152, Springer-Verlag,
New York, 1995.