
On tau functions for orthogonal polynomials and matrix models

2011, Journal of Physics A: Mathematical and Theoretical


On tau functions for orthogonal polynomials and matrix models Gordon Blower arXiv:1008.2352v1 [math.CA] 13 Aug 2010 Department of Mathematics and Statistics Lancaster University Lancaster, LA1 4YF England [email protected] 13th August 2010 Abstract. Let v be a real polynomial of even degree, and let ρ be the equilibrium probabilR ity measure for v with support S; so that, v(x) ≥ 2 log |x−y| ρ(dy)+Cv for some constant Cv with equality on S. Then S is the union of finitely many bounded intervals with endpoints δj , and ρ is given by an algebraic weight w(x) on S. The system of orthogonal polynomials for w gives rise to the Magnus–Schlesinger differential equations. This paper R identifies the τ function of this system with the Hankel determinant det[ xj+k ρ(dx)]n−1 j,k=0 of ρ. The solutions of the Magnus–Schlesinger equations are realised by a linear system, which is used to compute the tau function in terms of a Gelfand–Levitan equation. The tau function is associated with a potential q and a scattering problem for the Schrödinger operator with potential q. For some algebro-geometric q, the paper solves the scattering problem in terms of linear systems. The theory extends naturally to elliptic curves and resolves the case where S has exactly two intervals. MSC (2000) classification: 60B20 (37K15) Keywords: Random matrices, Scattering theory 1. Introduction This paper concerns systems of orthogonal polynomials that arise in random matrix theory, specifically in the theory of the generalized unitary ensemble [26], and may be described in terms of electrostatics. We consider a unit of charge to be distributed along an infinite conducting wire in the presence of an electrical field. The field is represented by a real P2N polynomial v(x) = j=0 aj xj such that a2N > 0, while the charge is represented by a Radon probability measure on the real line. Boutet de Monvel et al [5, 29] prove the existence of the equilibrium distribution ρ that minimises the electrostatic energy. Under general conditions which include the above v, they prove that there exists a constant Cv such that Z v(x) ≥ 2 log |x − y| ρ(dy) + Cv (x ∈ R) (1.1) S 1 and that equality holds if and only if x belongs to a compact set S. Furthermore, there exists g ≥ 0 and −∞ < δ1 < δ2 ≤ δ3 < . . . < δ2g+2 < ∞ (1.2) S = ∪g+1 j=1 [δ2j−1 , δ2j ] (1.3) such that It is a tricky problem, to find S for a given v, and [10, Theorem 1.46 and p. 408] contains some significant results including the bound g + 1 ≤ N + 1 on the number of intervals. When v is convex, a relatively simple argument shows that g = 0, so there is a single interval [21]. Definition. The nth order Hankel determinant for ρ is Dn = det hZ xj+k ρ(dx) S in−1 j,k=0 . (1.4) In section 3 we introduce the system of orthogonal polynomials for ρ, and in section 4, we regard Dn as a function of δ = (δ1 , . . . , δ2g+2 ), and derive a system for differential equations for log Dn , known as Schlesinger’s equations. Let A(z) be a proper rational 2 × 2 matrix function with simple poles at δj ; let αj be the residue at δj , and suppose that the eigenvalues of αj are distinct modulo the integers for j = 1, . . . , M . Consider the differential equation d Φ = A(z)Φ(z), dz (1.5) and introduce the 1-form M 1X Ω(δ) = trace residue(A(z)2 : z = δj ) dδj 2 j=1 (1.6) to describe its deformations. Then Ω turns out to be closed by [19]. Definition. (i) The tau function of the deformation equations associated with (1.5) is τ : CM \ {diagonals} → C such that d log τ = Ω. 
(ii) Given a self-adjoint and trace-class operator K : L2 (ρ) → L2 (ρ) such that 0 ≤ K ≤ I, and P(t,∞) the orthogonal projection f 7→ I[t,∞) f , the tau function of K is τ (t) = det(I − P[t,∞) K). (The definitions in (i) and (ii) are reconciled by Proposition 3.3.) In section 5 use the results from preliminary sections to prove that Dn gives the appropriate τ function for Schlesinger’s equations. As an illustration which is of importance in random matrix theory, we calculate the tau function explicitly when ρ is the semicircular 2 law. When S is the union of two intervals, the Schlesinger equations reduce to the Painlevé VI equation, as we discuss on section 7. Okamoto derived τ functions for other Painlevé equations in [27]. See also [15] and [2]. In sections 2,3 and 4, we develop standard arguments, then our analysis follows that of Chen and Its [8], who considered the ρ that is analogous to the Chebyshev distribution on multiple intervals. Chen and Its found their tau function explicitly in terms of theta functions on a hyperelliptic Riemann surface; in this paper, we show by means of scattering theory why such solutions emerge. In terms of definition (ii), we have a real τ (x) which is associated with a compactly 2 d supported real potential q(x) = −2 dx 2 log τ (2x) and hence the Schrödinger differential 2 d operator − dx 2 + q(x). One associates with each smooth q a scattering function φ; then 2 d one analyses the spectral data of − dx 2 + q(x) in terms of φ, with a view to recovering q. The Gelfand–Levitan integral equation links φ with q. In random matrix theory, tau functions are introduced alongside integrable kernels that describe the distribution of eigenvalues of random matrices, especially the generalized unitary ensemble; see [14, 26, 31]. Let X be a n × n complex Hermitian matrix, let λ = (λ1 ≤ λ2 ≤ . . . ≤ λn ) be the corresponding eigenvalues, listed according to multiplicity, and Pn consider the potential V (X) = n−1 j=1 v(λj ). Now let dX be the product of Lebesgue measure on the entries that are on or above the leading diagonal of X; then there exists 0 < Zn < ∞ such that  νn(2) (dX) = Zn−1 exp −n2 V (X) dX (1.7) defines a probability measure on the n × n complex Hermitian matrices. There is a natural (2) action of the unitary group U (n) on Mn given by (U, X) 7→ U XU † , which leaves νn (2) invariant. Hence νn is the generalized unitary ensemble with potential v. There exists a constant ζn such that ρn (dx) = ζn−1 e−nv(x) dx defines a probability measure on R; then we let Ekρn be the orthogonal projection onto span{xj : j = 0, . . . , k−1} in L2 (ρn ). The eigenvalue distribution satisfies Z 1 ♯{j : λj (X) ≤ t}νn(2) (dX) = det(I − I(t,∞) Enρn ) n (1.8) where the right-hand side can be expressed in terms of Hankel determinants. For large n, most of the eigenvalues actually lie in S by results of [5, 26]. Moreover, there exists a trace-class operator K on L2 (ρ) such that 0 ≤ K ≤ I and det(I − I(t,∞) Enρn )) → det(I − I(t,∞) K) 3 (n → ∞); (1.9) call this limit τ (t). Tracy and Widom [31] showed how to express such determinants in terms of systems of differential equations and Hankel operators. We introduce the matrix   −1 , 0 0 J= 1 and apply a simple gauge transformation to (1.5). Then for a sequence of real symmetric 2 × 2 matrices Jβk (n), we consider solutions of the differential equation 2g+2 X Jβk (n) dZ J = Z, dx x − δk (1.10) k=1 Z(x) → 0 (x → δj ), K(x, y) = Z(y)† JZ(x) . 
y−x and form the kernel (1.11) We show that the properties of K depend crucially upon the sequence of signatures of the matrices (δj − δk )Jβk (n). In Theorem 8.3, we introduce a symbol function φ from Z, a constant signature matrix σ and a Hankel operator Γφ such that K = Γ†φ σΓφ . In section 9 we introduce φ from (1.11), express φ in terms of a linear system as in [4] and hence obtain a matrix Hamiltonian H(x) such that ∞  Z τ (2x) = exp − x  trace H(u) du , (1.12) 2 d and prove that q(x) is meromorphic on a region. We regard − dx 2 + q as integrable if −f ′′ +qf = λf can be solved by quadratures for typical λ. This imposes severe restrictions upon q; indeed, Gelfand, Dikij and Its [12, 6] showed that the integrable cases arise from finite-dimensional Hamiltonian systems. In sections 10, 11 and 12 we consider cases in which −f ′′ + qf = λf has a meromorphic general solution for all λ, and q satisfies one of the following conditions: (i) q is rational and bounded at infinity; (ii) q is of rational character on an elliptic curve; (iii) q is the restriction of an abelian function to a straight line in the Jacobian of a hyperelliptic Riemann surface. In cases (ii) and (iii), the corresponding Schrödinger equation has a spectrum with only finitely many gaps. In (i) and (ii), we introduce a linear system (−A, B, C) so as to realise φ(x) = Ce−xA B, and use the operators A, B, C to solve the Gelfand–Levitan 4 equation. Thus we obtain explicit expressions for φ and τ . In (iii), we can do likewise under further hypotheses. 2. The equilibrium measure Given the special form of the potential, the equilibrium measure and its support satisfy special properties. To describe these, we introduce the polynomial u of degree 2N − 2 by Z u(z) = S v ′ (z) − v ′ (x) ρ(dx) z−x (2.1) and the Cauchy transform of ρ by R(z) = Z S ρ(dx) x−z (z ∈ C \ S) (2.2) and the weight  2g+2 w(x) = 2N a2N −Q(x) Y j=1 (x − δ2j−1 )(x − δ2j ) 1/2 (2.3) where Q(x) is a product of monic irreducible quadratic factors such that w(x)2 = 4u(x) − v ′ (x)2 . Proposition 2.1 (i) The Cauchy transform is the algebraic function that satisfies R(z)2 + v ′ (z)R(z) + u(z) = 0 (2.4) and R(z) → 0 as z → ∞. There exist nonzero polynomials u0 , u1 , u2 such that u0 R′ = u1 R + u2 . (ii) The support of ρ is S = {x ∈ R : 4u(x) − v ′ (x)2 ≥ 0}. (2.5) (iii) ρ is absolutely continuous and the Radon–Nikodym derivative satisfies dρ 1 = IS (x)w(x) dx 2π where 2π = R S (2.6) w(t)dt and w(x) → 0 as x tends to an endpoint of S. Proof. (i) The quadratic equation is due to Bessis, Itzykson and Zuber, and is proved in the required form in [28]. One can easily deduce that R satisfies a first-order linear differential equation with polynomial coefficients. 5 (ii) Pastur [28] shows that the support is those real x such that |v ′ (x) + p v ′ (x)2 − 4u(x)|2 = 4u(x), (2.7) and this condition reduces to 4u(x) ≥ v ′ (x)2 and u(x) ≥ 0, where the former inequality implies the latter. The polynomial 4u(x) − v ′ (x)2 has real zeros δ1 , . . . , δ2g+2 , and may additionally have pairs of complex conjugate roots, which we list as δ2g+3 , . . . , δ4N−2 with regard to multiplicity. Hence we can introduce w as above such that 4u(x)−v ′ (x)2 = w(x)2 . (iii) From (i) we deduce that 1 R(λ) = 2πi Z p S 4u(t) − v ′ (t)2 dt t−λ (2.8) since both sides are holomorphic on C \ S, vanish at infinity and have the same jump across S. By Plemelj’s formula, we deduce that Z p 4u(t) − v ′ (t)2 dt ′ (λ ∈ S). (2.9) v (λ) = 2p.v. λ−t 2π S See [28, 5, 29]. This gives the required expression for ρ. 3. 
Orthogonal polynomials First we introduce orthogonal polynomials for ρ, then the corresponding differential 2 equations. Let (pj )∞ j=0 be the sequence of monic orthogonal polynomials in L (ρ), where pj has degree j and let hj be the constants such that Z pj (x)pk (x)ρ(dx) = hj δjk ; (3.1) S and let (qj )∞ j=1 be the monic polynomials of the second kind, where qj (z) = Z S pj (z) − pj (x) ρ(dx) z−x (3.2) has degree j − 1. On account of Proposition 2.1, the orthogonal polynomials are semi classical in Magnus’s sense [22], although the weight typically lives on several intervals. The following result is standard in the theory of orthogonal polynomials; see [8]. R xpn (x)2 ρ(dx). Then Lemma 3.1 Let cn = hn /hn−1 and bn = h−1 n S (i) the polynomials (pn )∞ n=0 satisfy the three-term recurrence relation xpn (x) = pn+1 (x) + bn+1 pn (x) + cn pn−1 (x); 6 (3.3) (ii) the polynomials (qj )∞ j=1 likewise satisfy (3.3); (iii) the Hankel determinant of (1.4) satisfies Dn = h0 h1 . . . hn−1 . (3.4) We introduce also Yn (z) = and " pn (z) pn−1 (z) hn−1 pn (t)ρ(dt) z−t S R pn−1 (t)ρ(dt) 1 hn−1 S z−t R  #  −hn . 0 z − bn+1 Vn (z) = 1/hn (3.5) (3.6) Proposition 3.2 (i) The matrices satisfy the recurrence relation Yn+1 (z) = Vn (z)Yn (z). (3.7) (ii) The matrix Yn (z) is invertible, and det Yn (z) = 1. Proof. (i) This follows from (i) and (ii) of the Lemma 3.1. (ii) This follows by induction, where the induction step follows from the recurrence relation in (i). We restrict ρ restrict to (−∞, t) ∩ S and let µj (t) = Z xj ρ(dx) (3.8) S∩(−∞,t) be the j th moment; the corresponding Hankel determinant is  n Dn+1 (t) = det µj+k (t) j,k=0 . (3.9) Let En : L2 (ρ) → span{xk : k = 0, . . . , n − 1} be the orthogonal projection; we also introduce the projection P(t,∞) on L2 (ρ) given by multiplication f 7→ I(t,∞) f , where I(t,∞) denotes the indicator function of (t, ∞). Proposition 3.3 The tau function of En+1 satisfies det(I − En+1 P(t,∞) ) = 7 Dn+1 (t) . Dn+1 (3.10) Proof. We introduce an upper triangular matrix [aℓ,j ]nj,ℓ=0 with ones on the leading Pn diagonal such that pj (x) = ℓ=0 aℓj xℓ . Then we can compute det[µj+k (t)]nj,k=0 = det[aℓ,j ]nℓ,j=0 det[µj+k (t)]nj,k=0 det[ak,m ]m k,m=0 in hZ t pj (x)pk (x)ρ(dx) = det j,k=0 −∞ (3.11) We can also express the operators on L2 (ρ) as matrices with respect to the orthonormal p basis (pj / hj )nj=0 , and we find En+1 − En+1 P(t,∞) En+1 so that det hZ t pj (x)pk (x)ρ(dx) −∞ in j,k=0 1 ↔ p hj hk h Z t pj (z)pk (z)ρ(dz) −∞ in j,k=0 = det(En+1 − En+1 P(t,∞) En+1 )h0 . . . hn . (3.12) (3.13) We deduce that det[µj+k (t)]nj,k=0 = det(En+1 − En+1 P(t,∞) En+1 )Dn+1 . (3.14) 4. Schlesinger’s equations and recurrence relations Invoking Proposition 3.2(ii), we introduce the matrix function An (z) = Yn′ (z)Yn (z)−1   0 0 + Yn (z) Y (z)−1 . 0 −w′ (z)/w(z) n (4.1) The basic properties of An (z) are stated in (i) of the following Lemma, while (ii) gives detailed information that we need in the subsequent proof of Theorem 5.1. Lemma 4.1 (i) Let v ′ (z)2 − 4u(z) have zeros at δj for j = 1, . . . , 4N − 2. Then An (z) is a proper rational function so that An (z) = 4N−2 X j=1 αj (n) , z − δj where the residue matrices αj (n) depend implicitly upon δ. 8 (4.2) (ii) The (1, 2) and diagonal entries of the residue matrices satisfy 4N−2 X αk (n)12 = 0; (4.3)  αk (n)11 − αk (n)22 = 2(n + N ) − 1; (4.4) δk αk (n)12 = −2hn (n + N ). (4.5) k=1 4N−2 X k=1 4N−2 X k=1 Proof. 
(i) The defining equation (4.1) for An (z) may be written more explicitly as " = An (z) " p′n (z) p′n−1 (z) hn−1 − R 1 − hn−1 pn (t)w(t)dt z−t S R pn−1 (t)w(t)dt 1 hn−1 S z−t # R pn (z) pn−1 (z) hn−1 pn (t)w(t)dt (z−t)2 R pn−1 (t)w(t)dt (z−t)2 S S + " 0 0 # w ′ (z) R pn−1 (t)w(t)dt w(z) S z−t R pn−1 (t)w(t)dt w ′ (z) hn−1 w(z) S z−t # . (4.6) By considering the entries, we see that An (z) is a proper rational function with possible simple poles at the δj , as in (4.2). Hence we have a Laurent expansion 4N−2 4N−2 1 1 X 1 X αk (n) + 2 δk αk (n) + O 3 An (z) = z z z k=1 k=1 (z → ∞). (4.7) (ii) First we compute the (1, 2) entry of An (z), namely An (z)12 Z Z pn (t)w(t)dt pn (t)w(t)dt w′ (z) pn (t)w(t)dt = − pn (z) − pn (z) 2 z−t (z − t) w(z) z−t S S Z S Z n (n + 1)pn (z) =− 2 tn pn (t)w(t)dt − tn pn (t)w(t) dt z S z n+2 S Z 1 w′ (z) pn (z) tn pn (t)w(t)dt + O 3 (4.8) − w(z) z n+1 S z −p′n (z) Z and we can reduce these terms to An (z)12 1 −hn n hn (n + 1) hn (2N − 1) = − − +O 3 , z2 z2 z2 z which gives (4.3) and (4.5). 9 (4.9) Next, the (2, 2) entry of An (z) is Z Z p′n−1 (z) pn (t)w(t)dt pn (z) pn−1 (t)w(t)dt − An (z)22 = − hn−1 z−t hn−1 S (z − t)2 S Z w′ (z) pn (z) pn−1 (t)w(t)dt − w(z) hn−1 S z−t Z n−2 Z (n − 1)z pn (z) n =− t pn (t)w(t)dt − ntn−1 pn−1 (t)w(t)dt hn−1 z n+1 S hn−1 z n+1 S Z 1 2N − 1 pn (z) 1 n−1 t p (t)w(t)dt + O − n−1 z z n hn−1 S z2   1 1 − n − 2N (4.10) +O 2 = z z Similarly, the (1, 1) entry is Z Z pn−1 (t)w(t)dt pn−1 (z) pn (t)w(t)dt p′n (z) An (z)11 = + hn−1 S z−t hn−1 (z − t)2 S Z pn (t)w(t)dt w′ (z) pn−1 (z) + w(z) hn−1 z−t S Z Z p′n (z) (n + 1)pn−1 (z) n−1 = t pn−1 (t)w(t)dt + tn pn (t)w(t)dt n n+2 hn−1 z S hn−1 z S Z   ′ 1 w (z) pn−1 (z) tn pn (t)w(t)dt + O 2 + n+1 w(z) hn−1 z z S   n 1 = +O 2 (z → ∞). (4.11) z z By comparing the coefficients of 1/z in (4.7) with (4.9), (4.10) and (4.11), we obtain 4N−2 X k=1  n αk (n) = 0  0 , 1 − n − 2N (4.12) which leads to (4.4). Let √ n (z)+qn (z) √ − iπw(z)p w(z) 2πi 2πipn (z) Φn (z) =  √2πipn−1 (z) hn−1 n−1 (z)+qn−1 (z) √ − iπw(z)p w(z)h 2πi n−1  , (4.13) which is a matrix function with entries in C(z)[w]; note that Φn also depends upon the δj . Lemma 4.2 The functions Φn satisfy (i) the basic differential equation dΦn (z) = An (z)Φn (z), dz 10 (4.14) (ii) the deformation equation ∂Φn αj (n) =− Φn (z), ∂δj z − δj (4.15) (iii) and the recurrence relation Φn+1 (z) = Vn (z)Φn (z); (iv) moreover, Φn is invertible since det Φn (z) = 1/w(z). Proof. (i) We can write Φn (z) = Yn (z) √  2πi 0√ , 0 1/(w(z) 2πi) (4.16) and then the property (i) follows from (4.1). (ii) This follows from (i) by standard results in the theory of Fuchsian differential equations as in [14, 16]. (iii) The recurrence relation from Proposition 3.2(i). (iv) Given (iii), this identity follows from Proposition 3.2(ii). Lemma 4.2 states several properties that the Φn satisfy simultaneously, and hence generates several consistency conditions. By taking (i), (ii) and (iii) pairwise, we obtain three Lax pairs, which we state in the following three propositions. Proposition 4.3 The residue matrices satisfy Schlesinger’s equations ∂αk (n) [αj (n), αk (n)] = ∂δj δj − δk and (j 6= k) (4.17) 4N−2 X [αj (n), αk (n)] ∂αj (n) =− . ∂δj δj − δk (4.18) k=1;j6=k Proof. We can express the consistency condition ∂ 2 Φn (z) ∂δj ∂z = ∂ 2 Φn (z) ∂z∂δj as the Lax pair ∂An (z) αj (n) αj (n) αj (n)An (z) − An (z) = − 2 ∂δj z − δj (z − δj ) z − δj (4.19) and then one can simplify the resulting system of differential equations. See [14, 19]. 
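The commutator structure of Schlesinger's equations lends itself to a quick numerical sanity check. The sketch below (Python, with illustrative random 2 × 2 residue matrices and pole positions that are not taken from the paper) integrates (4.17)–(4.18) by a crude Euler scheme while one pole is moved, and confirms that the eigenvalues of every residue matrix are preserved along the flow, as the commutator form of the right-hand sides dictates.

```python
import numpy as np

# Sketch: integrate Schlesinger's equations (4.17)-(4.18) for illustrative random
# 2x2 residue matrices alpha_k at poles delta_k (data below are NOT from the paper),
# moving the pole delta_1, and check that the eigenvalues of every residue matrix
# stay constant, since the right-hand sides are commutators.
rng = np.random.default_rng(0)
M = 4                                     # number of poles
delta = np.array([0.0, 1.0, 2.5, 4.0])    # pole positions delta_k
alpha = [rng.standard_normal((2, 2)) for _ in range(M)]

def comm(X, Y):
    return X @ Y - Y @ X

def euler_step(alpha, delta, j, h):
    """One Euler step of d(alpha_k)/d(delta_j) from (4.17)-(4.18), then delta_j += h."""
    new = []
    for k in range(M):
        if k == j:
            rhs = -sum(comm(alpha[j], alpha[m]) / (delta[j] - delta[m])
                       for m in range(M) if m != j)
        else:
            rhs = comm(alpha[j], alpha[k]) / (delta[j] - delta[k])
        new.append(alpha[k] + h * rhs)
    delta = delta.copy()
    delta[j] += h
    return new, delta

eigs0 = [np.sort_complex(np.linalg.eigvals(a)) for a in alpha]
for _ in range(1000):                     # move delta_1 from 1.0 to 1.1
    alpha, delta = euler_step(alpha, delta, 1, 1e-4)
drift = [np.max(np.abs(np.sort_complex(np.linalg.eigvals(a)) - e))
         for a, e in zip(alpha, eigs0)]
print(drift)                              # small, and shrinks with the step size
```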
Proposition 4.4 The basic differential equation (4.14) and the recurrence relation in Lemma 4.2 are consistent, so   1 0 An+1 (z)Vn (z) − Vn (z)An (z) = . (4.20) 0 0 11 Proof. The Lax pair associated with the these conditions gives  d d An+1 (z)Φn+1 (z) = Vn (z)Φn (z) . Φn+1 (z) = dz dz (4.21) Proposition 4.5 (i) The deformation equation (4.15) and the recurrence relation in Lemma 4.2 are consistent, so − αj (n + 1) αj (n) ∂Vn (z) Vn (z) + Vn (z) = . z − δj z − δj ∂δj (4.22) (ii) In particular, the (1, 2) entry satisfies ∂ log hn = −h−1 n αj (n)12 . ∂δj (4.23) Proof. (i) This is the Lax pair associated with Lemma 4.2. (ii) By letting z → ∞ in (4.22), we deduce " ∂b     − ∂δn+1 1 0 1 0 j −αj (n + 1) + αj (n) = n 0 0 0 0 − h12 ∂h ∂δ n n − ∂h ∂δj j 0 # (4.24) n which implies that αj (n)12 = − ∂h . ∂δj 5. The tau function We introduce the differential 1-form on C4N−2 \ {diagonals} by Ωn = 4N−2 X j,k=1;j6=k  α (n)α (n)  j k trace dδj . δj − δk (5.1) Theorem 5.1 The Hankel determinant Dn gives the tau function, so Ωn = d log Dn . (5.2) Proof. By Proposition 4.3 and results of Jimbo et al [20], Ωn is an exact differential form, so dΩn = 0; hence there exists a function τn such that d log τn = Ωn , and so we proceed to identify τn . By Lemma 3.1(iii), we have log hn = log Dn+1 /Dn , hence we consider Ωn+1 − Ωn = 4N−2 X j6=k:j,k=1  α (n + 1)α (n + 1) − α (n)α (n)  j k j k dδj trace δj − δk 12 (5.3) where by Proposition 4.5(i) αj (n + 1) = Vn (δj )αj (n)Vn (δj )−1 so trace αj (n + 1)αk (n + 1) − αj (n)αk (n)   = trace αj (n)Vn (δj )−1 Vn (δk )αk (n)Vn (δk )−1 Vn (δj ) − αj (n)αk (n) . We have Vn (δj ) −1 Vn (δk ) =  1 δj −δk hn 0 1  (5.4) (5.5) so by direct calculation Ωn+1 − Ωn = 4N−2 X n j6=k:j,k=1 h−1 n αj (n)12 αk (n)11 − αk (n)22   + h−1 α (n) α (n) − α (n) k 12 j 22 j 11 n o −2 − hn (δj − δk )αj (n)12 αk (n)12 dδj (5.6) In this sum we have taken j 6= k, but the expression is unchanged if we include the corresponding terms for j = k; hence the coefficient of dδj is αj (n)12 4N−2 X h−1 n (αk (n)11 k=1 − − αk (n)22 ) − (αj (n)11 − αj (n)22 ) 4N−2 X 4N−2 4N−2 δj αj (n)12 X αj (n)12 X α (n) + δk αk (n)12 . k 12 h2n h2n k=1 hn αk (n)12 k=1 (5.7) k=1 We use Lemma 4.1 to reduce this to −h−1 n αj (n)12 , so Ωn+1 − Ωn = − = 4N−2 X j=1 4N−2 X j=1 Hence Ωn = d Pn−1 k=0 h−1 n αj (n)12 dδj ∂ log hn dδj . ∂δj (5.8) log hk . Following [19], we interpret (5.1) in terms of integrable systems and Hamiltonian mechanics. Let M = M2 (R)4N−2 be the product space of matrices, and let G = GL2 (R) act on M by conjugating each matrix in the list (X1 , . . . , Xn ) 7→ (U X1 U −1 , . . . , U Xn U −1 ). 13 The Lie algebra g of G has dual g∗ , and for each ξ ∈ g∗ the symplectic structure at ξ on g × g is given by ωξ (X, Y ) = ξ([X, Y ]). Given A(z) = 2N−2 X k=1 αk z − δk (5.9) as in (4.2), we introduce ω(X, Y ) = 2N−2 X trace αk [Xk , Yk ] k=1  (5.10) for X = (Xk )2N−2 and Y = (Yk )2N−2 in g2N−2 . Given f, g : M → C, their Poisson k=1 k=1 bracket is {f, g} = Xf (g), and the corresponding vector field satisfies X{f,g} = [Xf , Xg ]. The spectral curve of A(z) is the algebraic variety n o  ΣA = (z, w) ∈ C2 : det wI − A(z) = 0 . (5.11) As suggested by (5.1), we introduce the Hamiltonian Hj : M → C by Hj = X k:k6=j so that ∂ ∂δj  α (n)α (n)  j k trace δj − δk (5.12) log τ (δ) = Hj . We observe that Hj is a polynomial in the entries of αj (n) and αk (n), and is a rational function of δj and δk . To lighten the notation, we temporarily suppress the variable n. 
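The proof of Theorem 5.1 rests on the factorisation Dn = h0 h1 · · · hn−1 of Lemma 3.1(iii). Before continuing with the Hamiltonian picture, here is a small numerical check of that identity; the semicircular weight (2/π)√(1 − x²) on [−1, 1] and the quadrature size are illustrative choices, not data from the paper.

```python
import numpy as np

# Sketch: check D_n = h_0 h_1 ... h_{n-1} (Lemma 3.1(iii)) for the semicircular
# weight rho(dx) = (2/pi) sqrt(1 - x^2) dx on [-1, 1] (an illustrative choice).
nodes, wq = np.polynomial.legendre.leggauss(400)       # Gauss-Legendre rule on [-1, 1]
w = (2.0 / np.pi) * np.sqrt(1.0 - nodes**2) * wq       # quadrature weights for rho

def inner(f, g):
    return np.sum(f * g * w)

nmax = 6
moments = [np.sum(nodes**m * w) for m in range(2 * nmax)]

# monic orthogonal polynomials (values at the nodes) by Gram-Schmidt on 1, x, x^2, ...
basis, h = [], []
for j in range(nmax):
    p = nodes**j
    for q in basis:
        p = p - inner(p, q) / inner(q, q) * q
    basis.append(p)
    h.append(inner(p, p))                              # h_j = int p_j^2 rho(dx)

for n in range(1, nmax + 1):
    Hn = np.array([[moments[j + k] for k in range(n)] for j in range(n)])
    print(n, np.linalg.det(Hn), np.prod(h[:n]))        # the two columns agree
```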
(k) Proposition 5.2 (i) The Hamiltonian Hj gives a vector field (XHj )2N−2 k=1 which is associated with the differential equation αj i dA h . = A, dt z − δj (5.13) (ii) The Poisson brackets of the flows commute, so that {Hj , Hk } = 0. (iii) Under this flow, the spectral curve of A is invariant. Proof. (i) For each Y = (Yk )2N−2 k=1 , we introduce a flow on M by α̇k = [Yk , αk ]. We can differentiate Hj in the direction of Y and obtain Y (Hj ) = X trace ([Yk , αk ]αj ) trace (αk [Yk , αj ]) + δj − δk δj − δk k:k6=j 14 (5.14) (k) With Hk , we associate the Hamiltonian vector field XHj = (XHj )2N−2 such that k=1 Y (Hj ) = ω(XHj , Y ) = 2N−2 X k=1  (k) trace αk [(XHj , Yk ] . (5.15) We deduce that (k) αj (k 6= j) δj − δk X αk = . δj − δk XHj = (5.16) (j) (5.17) XHj k:k6=j (k) It is then a simple calculation to check that α̇k = [XHj , αk ] extends to give (5.13) for (d/dt)A(z). (j) (j) (ii) Given the vector fields (XHk )j corresponding to Hk and (XHℓ )j corresponding to Hℓ from (5.12), one can compute {Hk , Hℓ } = X (j) (j)  trace [XHk , Aj ]XHℓ j (5.18) and reduce the expression to zero by an elementary calculation. (iii) One can check that for each positive integer m, the d m dt traceA(z) = 0, and hence det(wI − A(z)) is invariant under the flow. 6. Orthogonal polynomials on a single interval In this section we consider the Chebyshev polynomials. • First suppose that v = 0. Then the corresponding equilibrium distribution is the Chebyshev distribution (1/π)(1 −x2 )−1/2 . In this case, the orthogonal polynomials are the Chebyshev polynomials of the first kind and, unsurprisingly, our results reduce to those of Chen and Its [8]. • For a < b, let v(z) =  a + b 2 8 , z − (b − a)2 2 (6.1) so, by standard results used in random matrix theory [26], the equilibrium measure is the semicircular law on [a, b], as given by ρ(dx) = p 8 (b − x)(x − a) I[a,b] (x)dx π(b − a)2 15 (6.2) Proposition 6.1 The tau function for the semicircular distribution is 2 τ (a, b) = (a − b)(2n +2n+1)/4 n(n+2)(a−b)2 /32 e . (6.3) Proof. Let Un be the Chebyshev polynomial of the second kind of degree n, which satisfies Un (cos θ) = sin(n + 1)θ , sin θ and let −2n n (6.4)  2x − (a + b)  (6.5) b−a which is monic and of degree n, and the pn are orthogonal with respect to the measure pn (x) = 2 (b − a) Un ρ. By elementary calculations involving trigonometric functions, one can show that hn = 2−4n (b − a)2n and   1 n(x − (a + b)/2) −(n + 1)(b − a)2 hn−1 /8 An (x) = , (6.6) (x − b)(x − a) n(b − a)2 /(2hn−1 ) −(n + 1)(x − (a + b)/2) which has poles at a and b, as expected. One verifies that Ωn =  n2 + (n + 1)2  da − db 4 a−b + n(n + 2) (a − b)(da − db) 16 (6.7) so that (6.3) follows by integration. 7. Painlevé equations for pairs of intervals Akhiezer considered a generalization of the Chebyshev polynomials to the pair of intervals [−1, α]∪[β, 1], and investigated their properties by conformal mapping. Chen and Lawrence [9] used the theory of elliptic functions to investigate these polynomials and in (8.18) expressed the Hankel determinant in terms of Jacobi’s elliptic theta functions. In this section we obtain the differential equation where S is two intervals, and obtain a differential equation for the endpoints that is related to the one from [9]. Let v be a polynomial of degree 2N ≥ 4 such that S = [δ1 , δ2 ] ∪ [δ3 , δ4 ]. There exists a Möbius transformation ϕ such that ϕ(δ1 ) = 0, ϕ(δ2 ) = 1 and ϕ(δ4 ) = ∞; then we let t = ϕ(δ3 ). 
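Before turning to the two-interval case in detail, we note that the closed form (6.3) can be checked against (6.7) symbolically: a short sketch (using sympy, and nothing beyond the formulas of Proposition 6.1) confirms that the differential of log τ(a, b) reproduces the coefficients of da and db in Ωn.

```python
import sympy as sp

# Sketch: verify Proposition 6.1 symbolically.  With log tau read off from (6.3),
#   log tau = ((2n^2 + 2n + 1)/4) log(a - b) + n(n + 2)(a - b)^2/32,
# its differential should reproduce Omega_n of (6.7).
a, b = sp.symbols('a b')
n = sp.symbols('n', positive=True)
log_tau = (2*n**2 + 2*n + 1)/4 * sp.log(a - b) + n*(n + 2)*(a - b)**2/32
# coefficients of da and db in Omega_n, read off from (6.7)
coeff_da = (n**2 + (n + 1)**2)/(4*(a - b)) + n*(n + 2)*(a - b)/16
coeff_db = -coeff_da
print(sp.simplify(sp.diff(log_tau, a) - coeff_da))   # 0
print(sp.simplify(sp.diff(log_tau, b) - coeff_db))   # 0
```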
Having fixed three of the endpoints, we can introduce the differential equations from section 4 that describe the effect of varying the endpoint t, namely and α α1 αt  d 0 Φ Φ(x) = + + dx x x−1 x−t (7.1) −αt ∂Φ = Φ. ∂t x−t (7.2) 16 Let A(x, t) be the matrix (α0 /x + α1 /(x − 1) + αt /(x − t)) and let A(x, t)12 be its top right entry. Then we introduce x = λ(t) such that A(x, t)12 = 0; then by [18, p. 1333], the corresponding Schlesinger equations give a version of the nonlinear Painlevé equation PV I in terms of λ, namely d2 λ  1 1 1  dλ 1  1 1 1  dλ 2 + + + − + + dt2 t t − 1 λ − t dt 2 λ λ−1 λ−t dt k0 t k1 (t − 1) (kt − 1)t(t − 1)  1 λ(λ − 1)(λ − t)  k∞ − 2 + − . = 2 t2 (t − 1)2 λ (λ − 1)2 (λ − t)2 (7.3) The Hamiltonian and tau function satisfy α α d α1 αt  0 t Ht = . log τ = trace + dt t t−1 (7.4) Having transformed S to [0, 1] ∪ [t, ∞] we can lift this to the portions of the real axis that are covered by the elliptic curve E = {(λ, w) : w2 = 4λ(λ − 1)(λ − t)} which has parameters λ = P(u/2) and w = P ′ (u/2) in terms of Weierstrass’s function P with e1 = t, e2 = 1 and e3 = 0. Hence we transform to the dependent variables u= Z λ ds p 0 s(s − 1)(s − t) , (7.5) and make the substitution u = u(λ(t), t). Fuchs [16] observed that d2 u 2t − 1 du u + + 2 dt t(t − 1) dt 4t(t − 1) p λ(λ − 1)(λ − t) h k0 t k1 (t − 1) kt t(t − 1) i = k − + − . ∞ 2t2 (t − 1)2 λ2 (λ − 1)2 (λ − t)2 (7.6) To solve this in a special case, we introduce the complete elliptic integral K(m) = Z 0 π/2 dθ p . 1 − m2 sin2 θ By comparing terms on power series, one can recover the following result. Proposition 7.1 (Poincaré) Suppose that k0 = k1 = kt = k∞ = 0 and that u(t) = √ √ c1 K( t) + c2 K( 1 − t) for constants c1 and c2 . Then u satisfies Legendre’s equation t(t − 1) du u d2 u + (2t − 1) + = 0, 2 dt dt 4 so λ(t) = P(u(t)/2; 0, 1, t) gives a solution of PV I . 17 (7.7) 8. Kernels associated with Schlesinger’s equations In this section, we introduce kernels that are associated with Schlesinger’s equations, and then factorize them in terms of Hankel operators. First we let νj = −2−1 trace αj (n) and observe that νj does not depend upon n. Indeed, by multiplying (4.22) by Vn−1 , one deduces that trace An+1 (z) = trace An (z), and since trace αj (n) = limz→δj (z − δj )trace An (z), we deduce that trace αj (n) is constant with respect to n. By (4.12), we P4N−2 have j=1 trace αj (n) = 1 − 2N. Now, given Φn as in (4.13), let Ψn (z) = 4N−2 Y j=1 (z − δj )νj Φn (z). (8.1) We next introduce the matrix valued kernel Mn (z, ζ) = Ψn (z)† JΨn (ζ) ; −2πi(z − ζ) (8.2) we aim to show that Mn is positive definite as an integral operator on L2 (S), and we observe that this property does not change if we introduce weights on S. Proposition 8.1 Let En (z, ζ) be the kernel of the orthogonal projection onto span{xj : j = 0, . . . , n − 1} in L2 (ρ). Then the top left entry of Mn (z, ζ) equals Mn (z, ζ)11 4N−2 4N−2 Y hn Y νj (ζ − δj )νj En (z, ζ). (z − δj ) = hn−1 j=1 j=1 (8.3) Proof. The Christoffel–Darboux formula gives En (z, ζ) = pn (z)pn−1 (ζ) − pn−1 (z)pn (ζ) . hn (z − ζ) (8.4) One can find Ψn (z)† JΨn (ζ) by direct calculation, and compare with this. Let βj (n) = αj (n) + νj I2 , which has zero trace. Furthermore, if Φn is a solution of the basic differential equation (4.14), then d Ψn (z) = Bn (z)Ψn (z) dz where Bn (z) = 4N−2 X j=1 18 βj (n) . z − δj (8.5) (8.6) We pause to note an existence result for solutions of the matrix system (8.5). Lemma 8.2 Suppose that βj (n) has eigenvalues ±κj (n) where 2κj (n) is not an integer. 
Then on a neighbourhood of δj , there exists an analytic matrix function Ξn,j such that Ψn (z) = Ξn,j (z)(z − δj )βj (n) (8.7) satisfies (8.5). Proof. This follows from Turrittin’s theorem; see [3]. For notational simplicity, we consider the interval (δ1 , δ2 ) and assume that δ1 = 0 and 1 < δ2 ; the general case follows by scaling. For a continuous function φ : (0, 1) → R8N−6 , the Hankel operator Γφ : L2 ((0, 1); dy/y; R) → L2 ((0, 1); dy/y; R8N−6) is given by Γφ f (x) = 1 Z φ(xy)f (y) 0 dy . y (8.8) Since βk (n) has zero trace, the matrix (δ1 − δk )Jβk (n) is real symmetric and hence is congruent to either   1 0 σk = ± , 0 1   1 0 ± , 0 0   1 0 , 0 −1   0 0 ; 0 0 (8.9) let σ = diagonal[σk ]4N−2 be the block diagonal sum of these matrices. k=2 Theorem 8.3 (i) Let β1 (n) be as in Lemma 8.2. Then there exists Zn , a 2 × 1 real vector solution of (8.5) such that Zn (x) → 0 as x → δ1 . (ii) The integral operator on L2 ((0, 1); dx/x) with kernel Kn (z, ζ) = √ zζZn (ζ)† JZn (z) z−ζ (8.10) is of trace class; moreover, there exists a real vector Hankel operator Γψn on L2 ((0, 1), dy/y) such that Kn = Γ†ψn σΓψn . (8.11) (iii) If σ ≥ 0, then Kn ≥ 0. Proof. (i) There exists an invertible constant 2 × 2 matrix Sn such that Sn z β1 (n) Sn−1 z κ1 (n) = 0  19 0 z −κ1 (n)  . (8.12) where κ1 (n) > 0. Hence by Lemma 8.2, there exists a constant 2 × 1 matrix C such that Zn (z) = Ψn (z)C is a solution of (8.5), and Zn (z) = O(|z − δ1 |κ1 (n) ) as z → δ1 . (ii) Hence we can introduce Kn by (8.10), and next we prove that the kernel satisfies √ 4N−2  ∂ X −δk xy ∂  Kn (x, y) = +y Zn (y)† Jβk (n)Zn (x). x ∂x ∂y (x − δk )(y − δk ) (8.13) k=2 ∂ ∂ √ First note that by homogeneity (x ∂x + y ∂y ) xy/(x − y) = 0. Since the βk (n) have zero † trace, we have Jβk (n) + βk (n) J = 0 and hence the differential equation gives  ∂ ∂  Zn (y)† JZn (x) +y x ∂x ∂y = Zn (y)† Bn (y)† JZn (x) + Zn (y)† JBn (x)Zn (x) 4N−2  x X √ y  † = ; − xyZn (y) Jβk (n)Zn (x) x − δk y − δk (8.14) k=2 on dividing by x − y, we obtain √ √ 4N−2  ∂ X δk xy ∂  xyZn (y)† JZn (x) x +y = Zn (y)† Jβk (n)Zn (x) ∂x ∂y x−y (x − δk )(y − δk ) (8.15) k=2 as in (8.13). Noting the shape of the final factor in (8.12), we choose h √xZ (x) i n φn (x) = column x − δk k=2,...,4N−2 (8.16) which has a 2 × 1 entry for each endpoint δk of S after δ1 , and the block diagonal matrix h i β(n) = diagonal −δk Jβk (n) k=2,...,4N−2 (8.17) with 2 × 2 blocks, and we consider K̃n (x, y) = Z 1 φn (yz)† β(n)φn (zx) 0 dz . z (8.18) First note that since κ1 (n) > 0, we have K̃(x, y) → 0 as x, y → 0. Then Z 1  ∂  ∂  K̃n (x, y) = yφ′n (yz)† β(n)φn (zx) + xφn (yz)† β(n)φ′n (xy) dz +y x ∂x ∂y 0 = φn (y)† β(n)φn (x) − φn (0)† β(n)φn (0). 20 (8.19) We have φn (0) = 0, so Kn (x, y) = K̃n (x, y) + ξ(x/y) (8.20) for some function ξ. But Zn (z)/(z − δ1 )κ1 (n) is analytic on a neighbourhood of δ1 , so it is clear that Kn (x, y) → 0 and K̃n (x, y) → 0 as x → 0 or y → 0; hence ξ = 0. By the choice of σ, there exists a block diagonal matrix γ(n) such that γ(n)† σγ(n) = β(n), so we can introduce ψn (x) = γ(n)φn (x) such that φn (x)† β(n)φn (y) = ψn (x)† σψn (y). For this symbol function ψn we have Kn (x, y) = Z 1 ψn (yz)† σψn (zx) 0 dz , z (8.21) or in terms of Hankel operators Kn = Γ†ψn σΓψn . We have Z 1 log 0 1 du kψn (u)k2 < ∞, u u so Γψ is Hilbert–Schmidt and hence Kn is trace class. (iii) If σk ≥ 0 for all k or equivalently σ ≥ 0, then Kn ≥ 0. 
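To make the Hilbert–Schmidt criterion at the end of the proof concrete, the sketch below discretises a scalar Hankel operator of the form (8.8). The symbol φ(x) = x^κ is an illustrative choice rather than one arising from the paper; after the substitution x = e^{−s} the operator has kernel φ(e^{−(s+t)}) on L²(0, ∞), and its squared Hilbert–Schmidt norm ∫0^1 log(1/u) φ(u)² du/u equals 1/(4κ²), which the discretisation reproduces.

```python
import numpy as np

# Sketch: discretise the Hankel operator (8.8) on L^2((0,1), dy/y) for the
# illustrative scalar symbol phi(x) = x**kappa (kappa > 0; not from the paper).
# Substituting x = exp(-s) gives the kernel phi(exp(-(s+t))) on L^2(0, infinity),
# whose squared Hilbert-Schmidt norm is int_0^1 log(1/u) phi(u)^2 du/u = 1/(4 kappa^2).
kappa = 0.75
L, n = 30.0, 1500                  # truncate (0, infinity) to (0, L)
h = L / n
s = (np.arange(n) + 0.5) * h       # midpoint grid
K = np.exp(-kappa * (s[:, None] + s[None, :]))    # discretised kernel
hs_numeric = np.sum(K**2) * h**2                  # Frobenius-norm approximation
print(hs_numeric, 1.0 / (4.0 * kappa**2))         # the two values nearly agree
```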
Corollary 8.4 Suppose that Z is a 2 × 1 solution of β β1 βt  d 0 Z Z= + + dx x x−1 x−t (8.22) such that the entries satisfy Z(x̄) = Z̄(x) and where (i) β0 is as in Lemma 8.2, and Z(x) → 0 as x → 0; (ii) Jβ1 is positive definite; (iii) Jβt ≥ 0. Then there exist an invertible real matrix S and a real diagonal matrix D such that " √ # ψ(x) = xSZ(x) √ x−1 xDSZ(x) x−t (8.23) satisfies ψ(x̄) = ψ(x) and √ xyZ(y)† JZ(x) = y−x Z 1 ψ(xz)† ψ(zy) 0 dz . z (8.24) Proof. We simultaneously reduce the quadratic forms associated with Jβ1 and Jβt , and introduce an invertible real matrix S such that Jβ1 = SS † and Jβt = SD2 S † , where D 21 is a real diagonal matrix such that the diagonal entries κ of D2 satisfy det(β1 − κβt ) = 0. Then we can write   Jβt t Jβ1 √ Z(x). (8.25) ψ(y)† ψ(x) = xyZ(y)† + (x − 1)(y − 1) (x − t)(y − t) Now we can follow the proof of Theorem 8.3, and deduce that √  ∂ ∂  xyZ(y)† JZ(x) † +y ; −ψ(y) ψ(x) = x ∂x ∂y x−y (8.26) hence we can obtain the result by integrating and using (i). Example 8.5 For the semicircle law on [a, b], as in Proposition 6.1, we have # " n(b−a) 2hn−1 2n+1 4 Jβa (n) = and Jβb (n) = " −n(b−a) 2hn−1 2n+1 4 2n+1 4 (n+1)(b−a)hn−1 8 2n+1 4 −(n+1)(b−a)hn−1 8 # , (8.27) (8.28) so that n(n + 1)(b − a)2 (2n + 1)2 − . (8.29) 16 16 In particular, when a = −1 and b = 1, the matrices Jβ−1 (n) and Jβ1 (n) are indefinite. det Jβa (n) = det Jβb (n) = 9. The tau function realised by a linear system In this section, we express the tau function of Kn from Theorem 8.3 as a Fredholm determinant, and then obtain this from the solution of an integral equation of Gelfand– Levitan type. The first step is introduce a scattering function ψ and then to realise this by a linear system, so that we can solve the Gelfand–Levitan equation. The differential equation dZn = Bn (x)Zn (x) dx has a solution from which we constructed a symbol function h √xγ(n)Z(x) i4N−2 ψn (x) = column . x − δk k=2 (9.1) (9.2) Suppressing n for simplicity, we change x ∈ (0, 1) to t ∈ (0, ∞) by letting x = δ1 + e−t and in the new variables write ψ(t) = ∞ X χℓ e−(κ1 +ℓ+1/2)t . ℓ=0 22 (9.3) where P∞ ℓ=0 kχℓ k < ∞. Likewise, we write τ (t) for τ (δ1 + e−t ). Let Ω = {z : ℜz ≥ 0} be the open right half-plane, let  0 Ψ(x) = ψ(x̄)† ψ(x) 0  (9.4) and extend Ψ to an analytic function Ψ : Ω → M8N−5 (C) such that Ψ(x) = Ψ(x)† for x > 0. Let Ψ(s) = Ψ(x + 2s) and Ψ∗(s) (x) = Ψ(x + 2s̄)† and let σ be a constant matrix; then let Ks = ΓΨ∗(s) σΓΨ(s) be a family of operators on L2 (0, ∞). Proposition 9.1 (i) The τ function associated with K = ΓΨ∗ σΓΨ is τ (2s) = det(I − Ks ), which gives an analytic function on Ω. 2 R∞ 0 d (ii) Let q(s) = −2 ds 2 log τ (2s). Then q(s) is meromorphic on Ω, and analytic where xkΨ(x + s)k2 dx < 1. (iii) If 0 ≤ K ≤ I, then τ (s) is non-negative for 0 < s < ∞, increasing and converges to one as s → ∞. Proof. (i) The kernel of the Hankel operator ΓΨ(s) has a nuclear expansion ΓΨ(s) ↔ where P∞ ℓ=0 kχℓ k R∞ 0 ∞ X −(κ1 +ℓ+1/2)(x+y+2s) e ℓ=0  0 χ†ℓ χℓ 0  (9.5) e−2(κ1 +ℓ+1/2)(x+ℜs) dx < ∞, so the Fredholm determinants are well defined. As in Schwarz’s reflection principle, s 7→ Ψ∗(s) is analytic, and ΓΨ(s) is Hilbert– Schmidt, so Ks is an analytic trace-class valued function on Ω. Using unitary equivalence, one checks that det(I − Ks ) = det(I − P(2s,∞) K) (s > 0). (9.6) (ii) Except on the discrete set of zeros of τ (2s), the operator I − Ks is invertible and q(s) = 2  dKs  d . trace (I − Ks )−1 ds ds (9.7) (iii) This follows from (9.6). 
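Proposition 9.1(ii) defines the potential through q(s) = −2 (d/ds)² log τ(2s). As a worked instance of this map, the single-soliton tau function τ(2t) = 1 + e^{−2t}, which section 10 below records as producing the potential −2 sech² t, can be pushed through the formula symbolically; the sketch uses sympy.

```python
import sympy as sp

# Sketch of the map tau -> q of Proposition 9.1(ii), q(s) = -2 (d/ds)^2 log tau(2s),
# for the single-soliton example tau(2t) = 1 + exp(-2t) noted in section 10.
t = sp.symbols('t', real=True)
tau_2t = 1 + sp.exp(-2*t)
q = -2*sp.diff(sp.log(tau_2t), t, 2)
print(sp.simplify((q + 2/sp.cosh(t)**2).rewrite(sp.exp)))   # 0, i.e. q(t) = -2 sech^2 t
```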
Next, we obtain an alternative formula for q by realising Ψ via a linear system. The technique is suggested by the inverse scattering transform. Let H0 = C8N−6 be the column vectors, H = ℓ2 be Hilbert sequence space, written as infinite columns, and introduce an 2 infinite row of column vectors C ∈ ℓ2 (H0 ) by C = (χℓ /kχℓ k1/2 )∞ ℓ=0 and a column B ∈ ℓ ∞ by B = (kχℓ k1/2 )∞ ℓ=0 and the infinite square matrix A = diagonal [ℓ + κ1 + 1/2]ℓ=0 . While 23 A is real and symmetric, we will write A† in some subsequent formulas, so as to emphasize their symmetry. In the following result we use the (8N − 5) × (8N − 5) block matrices     U (x, y) v(x, y) 0 ψ(x) W (x, y) = , Ψ(x) = , w(x, y)† z(x, y) ψ(x̄)† 0 so that Ψ(x̄) = Ψ(x)† and the matrix Hamiltonian   U (x, x)σ v(x, x) H(x) = w(x, x)† σ z(x, x) (9.8) (9.9) where v, w ∈ H0 , U operates upon H0 and z is a scalar. To simplify the statements of results, we use a special non-associative product ∗, involving σ, that is defined by      †  U v 0 ψ vψ U σψ ∗ = . (9.10) w† z ψ† 0 zψ † w† σψ Theorem 9.2 (i) The symbol ψ is realised by the linear system (−A, B, C), so ψ(t) = Ce−tA B. (9.11) (ii) There exists a solution of the Gelfand–Levitan equation Z ∞ W (x, y) + Ψ(x + y) + W (x, s) ∗ Ψ(s + y) ds = 0 (0 < x < y) (9.12) x such that the tau function of Proposition 9.1(i) satisfies d log τ (2x) = trace H(x) (x > 0). dx R∞ (iii) Suppose moreover that 0 xkΨ(x)k2 dx < 1. Then  ∂2 dH ∂2  − 2 W (x, y) = −2 W (x, y). 2 ∂x ∂y dx (9.13) (9.14) Proof. (i) This identity follows from (9.3). Since κ1 + ℓ + 1/2 > 0, the semigroup e−tA = diagonal [e−t(κ1 +ℓ+1/2) ]∞ ℓ=0 consists of trace class operators, and the integrals in the remainder of the proof are convergent. (ii) We introduce the observability Gramian Z ∞ † σ Qx = e−sA C † σCe−sA ds x 24 (x > 0), (9.15) modified to take account of σ, and the usual controllability Gramian Z ∞ † e−sA BB † e−sA ds, Lx = (9.16) x both of which define trace class operators on ℓ2 , and where Lx ≥ 0. The controllability operator Ξx : L2 ((0, ∞); H0) → H is Z ∞ e−tA Bf (s) ds (9.17) Ξx f = x while the observability operator is Θx : L2 ((0, ∞); H0) → H is Z ∞ † Θx f = e−sA C † f (s) ds. (9.18) x Finally, we let ψ(x) (s) = ψ(s + 2x), so that ψ(x) is realised by (−A, e−xA B, Ce−xA ). In terms of these operators, we have the basic identities Γ†ψ(x) = Ξ†x Θx Γψ(x) = Θ†x Ξx , (9.19) while Lx = Ξx Ξ†x and Qσx = Θx σΘ†x . (9.20) Hence we can rearrange the factors in the Fredholm determinants det(I − λΓ†ψ(x) σΓψ(x) ) = det(I − λΞ†x Θx σΘ†x Ξx ) = det(I − λΞx Ξ†x Θx σΘ†x ) = det(I − λLx Qσx ). (9.21) We deduce that log τ (2x) = log det(I − Γ†ψ σΓψ P(2x,∞) ) = log det(I − σΓψ P(2x,∞) Γ†ψ ) = log det(I − σΓψ(x) Γ†ψ(x) ) = log det(I − Γ†ψ(x) σΓψ(x) ) = trace log(I − Lx Qσx ), (9.22) and hence   † † d log τ (2x) = trace (I − Lx Qσx )−1 e−xA BB † e−xA Qσx + Lx e−xA C † σCe−xA dx † = B † e−xA Qσx (I − Lx Qσx )−1 e−xA B † + trace σCe−xA (I − Lx Qσx )−1 Lx e−xA C † . 25 (9.23) The integral equation     U (x, y) v(x, y) 0 ψ(x + y) + w(x, y)† z(x, y) ψ(x + y)† 0    Z ∞ U (x, s) v(x, s) 0 ψ(s + y) + ∗ ds = 0 w(x, s)† z(x, s) ψ(s + y)† 0 x (9.24) reduces to the identities U (x, y) = − z(x, y) = − ∞ Z Zx∞ v(x, s)ψ(s + y)† ds, w(x, s)† σψ(s + y) ds, (9.25) v(x, t)ψ(t + s)† σψ(s + y) dsdt = 0 (9.26) ψ(s + y)ψ(t + s)† σw(x, t) dtds = 0. 
(9.27) x and the pair of integral equations v(x, y) + ψ(x + y) − Z w(x, y) + ψ(x + y) − Z and ∞ x ∞ x Z ∞ Z ∞ x x To solve these integral equations, we let v(x, y) = −Ce−xA (I − Lx Qσx )−1 e−yA B (9.28) w(x, y) = −Ce−yA (I − Lx Qσx )−1 e−xA B; (9.29) and then by substituting these into (9.25) we obtain the diagonal blocks of the solution W , namely † U (x, y) = Ce−xA (I − Lx Qσx )−1 Lx e−yA C † (9.30) and † z(x, y) = B † e−yA Qσx (I − Lx Qσx )−1 e−xA B. (9.31) Hence we can identify the trace of the solution (9.9) as trace H(x) = trace σU (x, x) + z(x, x) † = trace σCe−xA (I − Lx Qσx )−1 e−xA C † † + B † e−xA Qσx (I − Lx Qσx )−1 e−xA B d log τ (2x). = dx 26 (9.32) (iii) By integrating by parts, we obtain the identity Z ∞ 2  ∂2 ∂ ∂2  dH ∂2  − W (x, y) − 2 Ψ(x + y) + − W (x, s) ∗ Ψ(s + y) ds = 0 (9.33) ∂x2 ∂y 2 dx ∂x2 ∂s2 x for 0 < x < y. One can easily verify that the product ∗ and the standard matrix multiplication satisfy (QW ) ∗ Ψ = Q(W ∗ Ψ), hence the formula Z ∞  dH dH dH −2 W (x, y) − 2 Ψ(x + y) − 2 W (x, s) ∗ Ψ(s + y) ds = 0 dx dx dx x (9.34) , and this shows that both −2 dH W (x, y) and follows from multiplying (9.33) by −2 dH dx dx 2 2 ∂ ∂ ( ∂x 2 − ∂y 2 )W (x, y) are solutions of the same integral equation. By uniqueness of solutions, they are equal. 10. Integrability of the tau function of a linear system d Let q(x) = −2 dx traceH(x) and τ be as in (9.32). In this section we describe the properties of τ in terms of the algebraic theory of differential equations [30]. Let F be a field (of complex functions) with differential ∂ that contains the subfield C of constants and adjoin an element h to form F(h), where either: R (i) h = g for some g ∈ F, so ∂h = g; R (ii) h = exp g for some g ∈ F; or (iii) h is algebraic over F. Definition. Let Fj (j = 1, . . . , n) be fields with differential ∂ that contain the subfield C of constants and suppose that F1 ⊆ F2 ⊆ . . . ⊆ Fn , (10.1) where Fj arises from Fj−1 by applying some operation (i), (ii) or (iii). Then Fn is a Liouvillian extension of F1 . Example. The tau function (6.3) belongs to some Liouvillian extension of C(x). Lemma 10.1 Let An : Cn → Cn , Bn : Cm → Cn and Cn : Cn → Ck be finite matrices, such that M An + A†n M is positive definite for some positive definite M . Let ψn (t) = Ce−tAn B, and define the corresponding terms as in the proof of Theorem 9.2. Then τn (x) belongs to some Liouvillian extension field F of F0 = C(t, e−tκj , e−tκ̄j ; j = 1, . . . , n), where (κj )nj=1 is the spectrum of An . Proof. By Lyapunov’s criterion [7], all of the eigenvalues κj of An satisfy ℜκj > 0, hence ke−tAn k is of exponential decay as t → ∞. By considering the Jordan canonical form 27 Pn of An , we obtain matrix polynomials pj (t) such that e−tAn = j=1 pj (t)e−κj t . Observe † † that F0 contains all the entries of e−tAn Bn Bn† e−tAn and e−tAn Cn† σCn e−tAn . The operator † Lx is an indefinite integral of e−tAn Bn Bn† e−tAn while the operator Qσx is an indefinite † integral of e−tAn Cn† σCn e−tAn , hence Lx and Qσx have entries in F0 ; moreover, the entries of (I − Lx Qσx )−1 are quotients of determinants with elements in F0 . Hence by (9.32), d dx log τn (2x) gives an element of F0 , so τn (x) itself is in a Liouvillian extension F of F0 . Theorem 10.2 Let ψ be as in Theorem 9.2. (i) There exists a sequence of finite rank matrices (An )∞ n=1 , with corresponding tau d log τn (2t) belongs to C(e−(κ1 +1/2)t , e−t ) and τn (2t) → τ (2t) as functions τn , such that dt n → ∞, uniformly on compact subsets of {t : ℜt > 0}. 
(ii) Suppose further that κ1 is rational. Then there exists a positive integer N such d that dt log τ (2t) is periodic with period 2πiN , and τn (2t) is given by elementary functions as in (10.4) below. Proof. (i) We introduce the finite rank matrices An = diagonal [κ1 + 1/2, κ1 + 3/2, . . . , κ1 + n + 1/2, 0, 0, . . .] (10.2) so that ke−tA − e−tAn kc1 ≤ e−(κ1 +n+1)ℜt /(1 − e−ℜt ). Similarly, we cut down B and C and follow through the proof of Theorem 9.2 to produce the appropriate choice of Wn (t, t) by the prescription of (9.8). Evidently, the eigenvalues of e−tAn have the form e−t(κ1 +ℓ+1/2) where ℓ = 0, . . . , n. We observe that e−t(κ1 +ℓ+1/2) belongs to C(e−(κ1 +1/2)t , e−t ) for all ℓ, so Wn (t, t) likewise belongs to C(e−(κ1 +1/2)t , e−t ). (ii) In this case, the set {mκ1 + m/2 + nℓ; m, n ∈ Z; ℓ = 0, 1, 2, . . .} is a finitely generated subgroup of the rationals, and hence has a smallest positive element M/N , where M, N ∈ N with M < N . Then all the terms N (κ1 + ℓ + 1/2) are positive integers, so exp(−(t + 2πN i)A) = exp(−tA) for all ℜt > 0, hence τ ′ (2t)/τ (2t) is periodic. By Lemma 10.1, there exists a rational function rn such that d log τn (2t) = rn (e−t/N ). (10.3) dt Suppose for simplicity that rn (z)/z has only simple poles; then from the partial fractions decomposition, there exist coefficients αj , βj and γj and bj , cj such that b2j < cj , real poles ak and a polynomial qn (z) such that (10.3) integrates to X X log τn (2t) = qn (e−t/N ) + αk log |e−t/N − ak | + βj log(e−2t/N + 2bj e−t/N + cj ) j k + X j e−t/N + bj γ q j tan−1 q . cj − b2j cj − b2j 28 (10.4) When rn (z)/z has higher order poles, one likewise obtains expressions that are similar but more complicated. In view of Theorem 10.2(ii) and the results of [8], it is natural to consider how the properties of τ relate to integrability of Schrödinger’s equation −f ′′ + qf = λf . Definition. Let q be meromorphic on C. As in [6,17], we say that q is algebro-geometric if there exists a non-zero R : C2 → C ∪ {∞} such that x 7→ R(x; λ) is meromorphic, λ 7→ R(x; λ) is a polynomial, and −R′′′ + 4(q − λ)R′ + 2q ′ R = 0. (10.5) Drach [6] observed that Schrödinger’s equation is integrable by quadratures for all λ only if q is algebro geometric. Conversely, if R(x; λ) is as above, then  Z p f (x) = R(x; λ) exp − dt  R(t; λ) (10.6) gives a solution to Schrödinger’s equation. In [6], Brezhnev catalogues several known special functions which give integrable forms of Schrödinger’s equation. The theory extends to meromorphic potentials on compact Riemann surfaces by [13, p. 235] and [23, p. 1122]. The following result of Gesztesy and Weikard summarizes various sufficient conditions for a potential to be algebro-geometric. The initial hypothesis rules out variants of Bessel’s equation. Theorem 10.3 [17] Suppose that −f ′′ +qf = λf has a meromorphic fundamental solution for each λ and that either (i) q is rational and bounded at infinity; (ii) q is elliptic, that is, doubly periodic; or (iii) q is periodic, with purely imaginary period, and q is bounded on {z ∈ C : |ℜz| > r} for some r > 0. Then q is algebro-geometric. We proceed to consider the cases (i),(ii) and (iii) of Theorem 10.3, the linear systems (−A, B, C) that give rise to them, and the corresponding τ functions. Proposition 10.4 Suppose that q satisfies Theorem 10.3(i). Then f has a rational Laplace transform and hence is the transfer function of a linear system (−An , B, C) with a finite matrix An . 29 Proof. 
By a theorem of Halphen [17], the general solution of −f ′′ + qf = λf has the Pn form f (x) = j=1 qj (x)e−κj x , where qj (x) are polynomials. Hence there exist constants PN ak , not all zero, such that k=0 ak f (k) (x) = 0; so by taking the Laplace transform, and introducing the initial conditions, we can recover the rational function fˆ(s); see [7, p.15]. We recall that any proper rational function arises as the transfer function of a linear system that has a finite matrix An , so fˆ(λ) = C(λI + An )−1 B. Example (iii). In Theorem 10.2 (ii), q is a rational function of e−t/N and under certain conditions gives rise to case (iii) of Theorem 10.3. In particular, q(t) = −2sech2 t is algebro-geometric, has period 2πi, is bounded on {z : |ℜz| > r} for all r > 0 and arises from τ (2t) = 1 + e−2t . This potential appears in the theory of solitons [4]. 11. Realising linear systems for elliptic potentials Suppose that q is real, smooth and periodic with period one; introduce Hill’s operator + q(x) in L2 (R). Then we introduce the Bloch spectrum, which is d2 − dx 2 SB = {λ ∈ C : the general solution of − f ′′ + qf = λf has f ∈ L∞ (R; C)}. (11.1) One can show that when q is an algebro-geometric potential, SB has only finitely many gaps; see [1,6, 24]. So we suppose that SB = [λ0 , λ1 ] ∪ [λ2 , λ3 ] ∪ . . . ∪ [λ2g , ∞) (11.2) with g gaps. The λj are the points of the simple periodic spectrum, such that −f ′′ + qf = λj f has a unique solution, up to scalar multiples, that is one or two periodic. Let Φ be the 2 × 2 fundamental solution matrix that satisfies     d 0 1 1 0 Φ(x) = Φ(x), Φ(0) = , (11.3) −λ + q(x) 0 0 1 dx and let ∆(λ) = trace Φ(1) be the discriminant of Hill’s equation. We can characterize SB as {λ ∈ R : ∆(λ)2 ≤ 4}, and its components are known as the intervals of stability. Q2g The (hyper) elliptic curve C : y 2 = − j=0 (x − λj ) has genus g, and we can form the (hyper) elliptic function field Eg = C(x)[y]. We therefore have a situation quite analogous to (2.5) and the Riemann surface E of section 7. In this section, we consider the case of g = 1, where C is an elliptic curve which is parametrized by P. Hochstadt proved that g = 1 if and only if q(x) = c1 + 2P(x + c2 ) where c1 and c2 are constants; see [17]. Starting finite matrices, we formulate a version of the Gelfand–Levitan equation that is appropriate when φ(x) = Ce−xA B is periodic. (The equation (9.12) does not converge 30 when the functions are periodic.) A variant of this was used in [11] to solve the matrix nonlinear Schrödinger equation. Thus we will realise elliptic tau functions from linear systems. Definition. (Periodic linear system (−A, B, C; E)) Let A, B, C and E be finite square matrices of equal size; let ε = ±1, and suppose that BC = ε(AE + EA), BE = EB, EA = AE and exp 2πA = I. Define φ(x) = Ce−xA B to be the scattering function for (−A, B, C) and then introduce W (x, y) = Ce−xA I − e−xA Ee−xA We define the tau function to be Z τ (x) = exp −1 x trace W (y, y) dy 0 e−yA B. (11.4)  (11.5) 2 d and let q(x) = −2 dx 2 log τ (x) be the potential function. We define τ indirectly so as to accommodate the most significant applications. The definition retains the spirit of Theorem 9.2, on account of the following result. Lemma 11.1 (i) The matrices satisfy the Gelfand–Levitan equation −φ(x +y) +W (x, y) −ε and Z 2π W (x, z)φ(z +y) dz = W (x, y)E (0 < x < y < 2π), (11.6) x d log det(I − e−xA Ee−xA ) = εtrace W (x, x). dx (11.7) (ii) Let F be a differential field that contains all the entries of e−xA . 
Then τ (x) belongs R 2π to a Liouvillian extension of F, and τ (x + 2π) = κτ (x) where κ = exp 0 trace W (y, y) dy. (iii) Suppose moreover that ε = 1 and 2πkφk∞ < 1. Then  d  ∂ 2W ∂ 2W − = −2 W (x, x) W (x, y). ∂x2 ∂y 2 dx (11.8) Proof. (i) One can check that Z 2π x e−zA BCe−zA dz = εe−xA Ee−xA − εE and it is then a simple matter to verify the integral equation (11.6). 31 (11.9) By rearranging terms, one checks that trace W (x, x) = trace (I − e−xA Ee−xA )−1 e−xA BCe−xA d = ε trace log(I − e−xA Ee−xA ) dx d =ε log det(I − e−xA Ee−xA ). dx  (11.10) (ii) By (i), τ is given by exponential integrals of the entries of e−xA . Note that W (x, y) Rx is periodic in both x and y, so W (x, x) is periodic and hence 0 traceW (y, y) dy increases by the same amount as x increases through any interval of length 2π. (iii) By repeatedly differentiating (11.6), and using periodicity, one derives the identity d  ∂ 2W ∂ 2W ∂W ′ − + 2 W (x, x) φ(x + y) + W (x, 0)φ (y) − (x, 0)φ(y) ∂x2 ∂y 2 dx ∂y − Z 2π  x ∂ 2W ∂ 2W  ∂ 2W ∂ 2W − φ(z + y) dz = E − E ∂x2 ∂y 2 ∂x2 ∂y 2 (11.11) Since ABC − CBA = 0, we obtain W (x, 0)φ′ (y) − ∂W (x, 0)φ(y) = 0, ∂y (11.12) d W (x, x). By the assumpso (11.11) is a multiple of the original integral equation by −2 dx tions on kφk∞ , the solutions are unique, hence the differential equation is satisfied. By introducing infinite block matrices, we can extend the scope of Lemma 11.1. Clearly we can replace ε in (11.6) by a diagonal matrix with blocks of ±1 entries on the diagonal. 2 d One can interpret the following result as saying that Lamé’s operator − dx 2 + 2P has the scattering function proportional to sin x. Let ω1 and ω2 be the periods, so that ω = ω2 /ω1 has ℑω > 0; then let e1 = P(ω1 /2), e2 = P((ω1 + ω2 )/2) and e3 = P(ω2 /2); then let Jacobi’s modulus be m2 = (e2 − e3 )/(e1 − e3 ) and q = eiωπ . To be specific, we choose w1 = 2π and w2 = 2πi. Let A, B and C be the infinite block diagonal matrices with 2 × 2 diagonal blocks  ∞ A = diagonal J n=−∞ ,  ∞ E = diagonal q2|n| I2 n=−∞ , 32 C = A, (11.13) B = 2E. (11.14) Proposition 11.2 (i) The functions φ(x) = Ce−xA B and W (x, y) of (11.4) satisfy the Gelfand–Levitan equation (11.6) and trace φ(x) = 4 1 + q2 sin x; 1 − q2 (11.15) (ii) The corresponding tau function is entire, belongs to a Liouvillian extension of the standard elliptic function field and satisfies 2P(x) = − d2 log τ (x). dx2 (11.16) Proof. (i) The matrices satisfy EB = BE, AE = EA and BC = AE + EA, so Lemma 11.1(i) applies. Note that the entries of E are summable, so E defines a trace class operator, hence the trace exists and a simple calculation gives (11.15). (ii) Observe also that det(I − q 2|n| −2xA e 1 − q2|n| cos 2x −q2|n| sin 2x ) = det q2|n| sin 2x 1 − q2|n| cos 2x  = 1 − 2q2|n| cos 2x + q4|n| , so one has det(I − e−xA Ee−xA ) = 4 sin2 x ∞ Y n=1  (1 − 2q2n cos 2x + q4n )2 ; (11.17) (11.18) for comparison, by [25, p 135] the Jacobi elliptic function satisfies 1/4 θ1 (x) = 2q sin x ∞ Y n=1  1 − 2q2n cos 2x + q4n (1 − q2n ). (11.19) So we have an entire function τ (x) = det(I − e−xA Ee−xA ) = Moreover, we have [25, p. 132] P(x) = − θ1 (x)2 Q . ∞ q1/2 n=1 (1 − q2n )2 d2 d2 log θ (x) + e + log θ1 (x) 1 1 dx2 dx2 x=1/2 , (11.20) (11.21) hence we obtain (11.16). Let E be the elliptic function field of functions of rational character on the complex torus C/(ω1 Z + ω2 Z). Then E = C(P)[P ′ ], and by (11.21) E has a Liouvillian extension Eθ that contains θ1 . 33 Theorem 11.3 Let τ be an elliptic function. 
Then there exists a periodic linear system (−A, B, C; E), where A, B, C and E are infinite block diagonal matrices with 2 × 2 blocks, such that d log τ (x) = trace W (x, x). (11.22) dx Proof. Any elliptic function is of rational character on C/(ω1 Z + ω2 Z), and is the ratio of theta functions by [25, p 105], so τ (x) = m Y θ(x − aj ) θ(x − bj ) j=1 (11.23) where a1 + . . . + am = b1 + . . . + bm . First we construct a periodic linear system with θ as its tau function. For n = 0, let A0 = J/2, E0 = −iJ, B0 = iI and C0 = I, then (−A0 , B0 , C0 ; E0 ) is a periodic linear system such that det(I − e−xA0 E0 e−xA0 ) = 2i sin x. For n = 1, 2, . . . , let An = Cn = J, En = q2n I and Bn = 2En ; then (−An , Bn , Cn ; En ) is a periodic linear system such that det(I − e−xAn En e−xAn ) = 1 − 2q2n cos 2x + q4n . Hence we can introduce block diagonal matrices A = diagonal[A0 , A1 , . . .] and E = diagonal[E0 , E1 , . . .], and so on to give a periodic linear system (−A, B, C; E) such that −xA det(I − e −xA Ee ) = 2i sin x ∞ Y n=1 = q1/4 (1 − 2q2n cos 2x + q4n ) iθ(x) Q∞ . 2n n=1 (1 − q ) (11.24) Next we replace (−A, B, C; E) by the terms (−A, eaj A B, Ceaj A ; eaj A Eeaj A ) which give Wj by (11.4); likewise we introduce (−A, ebj A B, −Cebj A ; ebj A Eebj A ) which give Ŵj by (11.4). We then form the block diagonal matrix   aj A bj A aj A bj A aj A aj A bj A bj A (11.25) (−A) ⊕ (−A), e B ⊕ e B, Ce ⊕ (−Ce ); e Ee ⊕ e Ee ⊕m j=1 which gives the required W (x, y) = ⊕m j=1 Wj (x, y) ⊕ Ŵj (x, y) by (11.4), and we verify m   X trace Wj (x, x) + trace Ŵj (x, x) trace W (x, x) = j=1 m =  d X log θ(x − aj ) − log θ(x − bj ) dx j=1 = d log τ (x). dx 34 (11.26) One can check that W satisfies (11.6) with ε replaced by a diagonal matrix with diagonal entries ±1. 12. Linear systems for potentials on hyperelliptic curves In this final section, we extend the analysis of section 11 to (11.2) in the case of g > 1. To obtain a model for the Riemann surface of C, we choose a two-sheeted cover of C with cuts along SB , and introduce the canonical homology basis consisting of: • loops αj that start from [λ2g , ∞), pass along the top sheet to [λ2j−2 , λ2j−1 ], then return along the bottom sheet to the start on [λ2g , ∞); • loops βj that go around the intervals of stability [λ2j−2 , λ2j−1 ] that do not intersect with one another, for j = 1, . . . , g. Then as in [14, p 61], we form the g × 2g Riemann matrix [I; Ω] from the g × g matrix blocks I= hZ αk xj−1 dx ig , y j,k=1 and Ω= hZ βk xj−1 dx ig . y j,k=1 Then the corresponding Riemann theta function is X  Θ(s | Ω) = exp iπhΩn, ni + 2πihs, ni . (12.1) (12.2) n∈Zg Example. Suppose that g = 2 and let  a Ω= b b d  (12.3) where ℑa > 0, ℑd > 0 and b ∈ Q. Then choose p ∈ N such that pb ∈ Z. One can easily check that Θ(s, t | Ω) = p−1 X eπ(ar 2 +2brµ+dµ2 ) 2πrs 2πµt e e r,µ=0 θ(ps + r | p2 a)θ(pt + µ | p2 d). (12.4) Proposition 12.1 Suppose that q is a periodic potential with g spectral gaps, as above, and that Θ( | Ω) is a finite sum of products of Jacobi elliptic functions. Then there exist N < ∞, xj ∈ R, σj ∈ C with ℑσj > 0; and block diagonal matrices (Aj , Bj , Cj ; Ej ) with 2 × 2 diagonal blocks for j = 1, . . . , N , such that θ(x − xj | σj ) is the tau function of (−Aj , Bj , Cj ; Ej ) and q belongs to the field C(θ(x − xj | σj ); j = 1, . . . , N ). Proof. We introduce the product of real ovals q n1 g o Tg = ∆(xj ) + 4 − ∆(xj )2 : λ2j−1 ≤ xj ≤ λ2j : j = 1, . . . , g 2 j=1 35 (12.5) which has dimension g. 
McKean and van Moerbeke [24, p260] considered the manifold M of all the smooth real one-periodic potentials such that the corresponding Hill’s operator has simple spectrum {λ1 , . . . , λ2g }. Using the KdV hierarchy, they introduced a 1 to 1 differentiable map from M onto Tg , where the tangent vectors on M are differential operators. Let Λ be the lattice generated by the columns of [I; Ω], and note that Cg /Λ is the Jacobi variety of C. They showed that q extends to an abelian function on Cg which is periodic with respect to Λ, hence gives a function of rational character on Cg /Λ. The extended function q belongs to Eg , hence is a theta quotient. Moreover, translation on the potential is equivalent to linear motion on Cg at constant velocity. Thus they solved the inverse spectral problem explicitly, by showing on [24, p.262] that q(x) = g X εj j=0 Θ(X − ωj∗ /2 | Ω)Θ(X − ωj∗∗ /2 | Ω) ∗ /2 | Ω)Θ(X − ω ∗∗ /2 | Ω) Θ(X − ω∞ ∞ (12.6) where X = (x1 , . . . , xg−1 , ax + b) has a, b, x1 , . . . , xg−1 fixed, while x varies, and the con∗ ∗∗ stants εj , ωj∗ , ωj∗∗ , ω∞ and ω∞ are notionally computable. By hypothesis, each factor Θ(X − ω ∗ /2 | Ω) may be written a a finite sum of products of functions such as θ(ax + cj | dj ), and we can apply Theorem 11.3 to each such factor. Weierstrass and Poincaré developed a systematic reduction procedure for such elliptic functions of higher genus, so we can describe the scope of Proposition 12.1. The Siegel upper half-space is Sg = {Ω ∈ Mg×g (C) : Ω = Ωt ; ℑΩ > 0}. (12.7) Let X and J be the 2g × 2g rational block matrices   α β , X= γ δ  0 J= I −I 0  (12.8) such that XJX t = J; the set of all such X is the symplectic group Sp(2g; Q). Now X is associated with the transformation ϕX of Sg given by ϕX (Ω) = (αI + βΩ)−1 (γI + δΩ), (12.9) thus Sp(2g; Q) acts on Sg . Proposition 12.2 [1] (i) Suppose that Ω can be reduced to a diagonal matrix by the action of the symplectic group. Then Θ( | Ω) can be expressed as a sum of products of Jacobian elliptic theta functions. 36 (ii) Condition (i) is equivalent to C being a N -sheeted covering of the one-dimensional complex torus for some N . (iii) The orbit of Sp(2g, Q) that contains iI is dense in Sg . References 1. E.D. Belokolos and V.Z. Enolskii, Reduction of abelian functions and algebraically integrable systems I. Complex analysis and representation theory. J. Math. Sci. (New York) 106 (2001), 3395–3486. 2. M. Bertola, B. Eynard, and J. Harnad, Semiclassical orthogonal polynomials, matrix models and isomonodromic tau functions, Comm. Math. Phys. 263 (2006), 401–437. 3. G. Blower, Integrable operators and the squares of Hankel operators, J. Math. Anal. Appl. 340 (2008), 943–953. 4. G. Blower, Linear systems and determinantal random point fields, J. Math. Anal. Appl. 355 (2009), 311–334. 5. A. Boutet de Monvel, L. Pastur and M. Shcherbina, On the statistical mechanics approach in the random matrix theory: integrated density of states, J. Statist. Physics 79 (1995), 585–611. 6. Y.V. Brezhnev, What does integrability of finite-gap or soliton potentials mean?, Philos. Trans. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 366 (2008), 923–945. 7. C.-T. Chen, Linear system theory and design, third edition, Oxford University Press, 1999. 8. Y. Chen and A.R. Its, A Riemann–Hilbert approach to the Akhiezer polynomials, Philos. Trans. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 366 (2008), 973–1003. 9. Y. Chen and N. Lawrence, A generalization of the Chebyshev polynomials, J. Phys. A 35 (2002), 4651–4699. 10. P. Deift, T. 
Kriecherbauer and K.T. -R. McLaughlin, New results on the equilibrium measure for logarithmic potentials in the presence of an external field, J. Approx. Theory 95 (1998), 388–475. 11. F. Demontis and C. van der Mee, Explicit solutions of the cubic matrix nonlinear Schrödinger equation, Inverse Problems 24 (2008), 025020. 12. L.A. Dikij and I. M. Gelfand, Integrable nonlinear equations and the Liouville theorem, Funct. Anal. Appl. 13 (1979), 6–15. 13. H.M. Farkas and I. Kra, Riemann surfaces, Springer-Verlag, New York, 1980. 14. A.S. Fokas, A.R. Its, A.A. Kapaev and V. Yu. Novkshenov, Painlevé transcendents: the Riemann–Hilbert approach, American Mathematical Society, Providence Rhode Island, 2006. 37 15. P.L. Forrester and N.S. Witte, Random matrix theory and the sixth Painlevé equation, J. Phys. A 39 (2006), 12211–12233. 16. R. Fuchs, Über lineare homogone Differentialgleichungen zweiter Ordnung mit drei in Endlichen gelegenen wesentlich singulären Stellen, Math. Ann. 63 (1907), 301–321. 17. F. Gesztesy and R. Weikard, Elliptic algebro-geometric solutions of the KdV and AKNS hierarchies–an analytic approach, Bull. Amer. Math. Soc. (N.S.) 35 (1998), 271–317. 18. D. Guzzetti, The elliptic representation of the general Painlevé VI equation, Comm. Pure Appl. Math. 55 (2002), 1280–1363. 19. N.J. Hitchin, Riemann surfaces and integrable systems, pp 11–52 in N.J. Hitchin, G.B. Segal and R.S. Ward, Integrable systems: twistors, loop groups and Riemann surfaces, Oxford Science Publications, 1999. 20. M. Jimbo, T. Miwa and K. Ueno, Monodromy preserving deformation of linear ordinary differential equations with rational coefficients, I. General theory and τ function, Phys. D 2 (1981), 306–352. 21. K. Johansson, On fluctuations of eigenvalues of random Hermitian matrices, Duke Math. J. 91 (1998), 151–204. 22. A.P. Magnus, Painlevé -type differential equations for the recurrence coefficients of semi-classical orthogonal polynomials, J.Comp. Appl. Math. 57 (1995), 215–237. 23. R. Maier, Lamé polynomials, hyperelliptic reduction and Lamé band structure, Philos. Trans. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 366 (2008), 1115–1153. 24. H.P. McKean and P. van Moerbeke, The spectrum of Hill’s equation, Invent. Math. 30 (1975), 217–274. 25. H. McKean and V. Moll, Elliptic curves: Function theory, geometry, arithmetic, Cambridge University Press, 1997. 26. M. L. Mehta, Random matrices, second edition (Academic Press, San Diego 1991). 27. K. Okamoto, On the τ -function of the Painlevé equations, Physica D 2 (1981), 525–535. 28. L.A. Pastur, Spectral and probabilistic aspects of matrix models, pp 205–242, in Algebraic and Geometric Methods in Mathematical Physics, edrs. A. Boutet de Monvel and V.A. Marchenko, Kluwer Acad. Publishers, 1996. 29. E. B. Saff and V. Totik, Logarithmic potentials with external fields, Springer, Berlin 1997. 30. M.F. Singer, Introduction to the Galois theory of linear differential equations, pp. 1–83, in Algebraic theory of Differential Equations, Edrs M.A.H. McCallum and A.V. Mikhailov, London mathematical Society Lecture Notes, Cambridge, 2009. 38 31. C.A. Tracy and H. Widom, Fredholm determinants, differential equations and matrix models, Comm. Math. Phys. 163 (1994), 33–72. 39