
Variational collocation on finite intervals

2007, Journal of Physics A: Mathematical and Theoretical

In this paper we study a new family of sinc-like functions, defined on an interval of finite width. These functions, which we call "little sinc", are orthogonal and share many of the properties of the sinc functions. We show that the little sinc functions, supplemented with a variational approach, enable one to obtain accurate results for a variety of problems. We apply them to the interpolation of functions on a finite domain and to the solution of the Schrödinger equation, and compare the performance of the present approach with that of other methods.

Variational collocation on finite intervals

Paolo Amore and Mayra Cervantes
Facultad de Ciencias, Universidad de Colima, and Centro Universitario de Ciencias Básicas, Bernal Díaz del Castillo 340, Colima, Colima, Mexico

Francisco M. Fernández
INIFTA (Conicet, UNLP), Diag. 113 y 64 S/N, Sucursal 4, Casilla de Correo 16, 1900 La Plata, Argentina

arXiv:quant-ph/0608069v1, 8 Aug 2006

PACS numbers: 03.30.+p, 03.65.-w

I. INTRODUCTION

The main goal of this paper is to introduce a new set of orthogonal functions, hereafter called "little sinc functions" (LSF), and to show that they can be used to solve a wide class of problems, such as function interpolation and the numerical solution of differential equations, including the Schrödinger equation. The LSF, which are defined on a finite domain, were first derived by D. Baye [1] in the framework of generalized meshes, but their relation to the usual sinc functions was not recognized. In recent years Baye and collaborators have used these and other functions to solve the Schrödinger equation with several different potentials, producing accurate numerical results for both the energies and the wave functions [1, 2, 3]. On the other hand, sinc collocation methods have been applied to a larger class of problems, including those mentioned above (see for example [4]), and they have been put on firm mathematical grounds [5].
Examples of applications of the sinc functions can be found in references [6, 7, 8, 9, 10, 11], among others. Recently, one of the authors showed that sinc collocation methods can be optimized by means of a variational approach [12] based on the Principle of Minimal Sensitivity [13]. This optimization yields the highest precision attainable with a given number of grid points, and can be particularly valuable in problems that require intensive numerical calculation. We feel that it is important to establish the relation of the LSF to the sinc functions for at least two reasons: first, it makes it possible to generalize the variational approach [12] to the LSF, thus improving the convergence of the method; second, it offers an interesting link to generalized mesh methods and opens new areas of application for the latter.

This paper is organized as follows: in Section II we describe the general properties of the usual sinc functions, defined on the real line. In Section III we derive an expression for the LSF and discuss their properties. In Section IV we solve the Schrödinger equation with two different potentials and compare numerical results obtained with the usual sinc functions and with the LSF. Finally, in Section V we draw our conclusions.

∗ Electronic address: [email protected]
† Electronic address: [email protected]

II. SINC FUNCTIONS

In what follows we outline some of the basic properties of the generalized sinc functions, defined as

  S_k(h, x) ≡ sin[π(x − kh)/h] / [π(x − kh)/h] ,   (1)

where k = 0, ±1, ±2, . . . . The sinc function for a given value of the index k is peaked at x_k = kh, where it equals unity, and vanishes at the other points x_j = jh, with j ≠ k and j = 0, ±1, ±2, . . . . From the integral representation

  S_k(h, x) = (h/2π) ∫_{−π/h}^{+π/h} e^{±i(x−kh)t} dt ,   (2)

we easily derive the normalization factor

  I₁ ≡ ∫_{−∞}^{+∞} S_k(h, x) dx = h ,   (3)

and the orthogonality property
  I₂ ≡ ∫_{−∞}^{+∞} S_k(h, x) S_l(h, x) dx = h δ_kl .   (4)

It is worth noticing that eq. (2) defines a Dirac delta function in the limit h → 0. A function f(x) analytic on a rectangular strip centered on the real axis can be approximated in terms of sinc functions as [5]

  f(x) ≈ Σ_{k=−∞}^{+∞} f(kh) S_k(h, x) ,   (5)

which together with the normalization factor can be used to approximate a definite integral:

  ∫_{−∞}^{+∞} f(x) dx ≈ h Σ_{k=−∞}^{+∞} f(kh) .   (6)

It is not difficult to derive simple expressions for the derivatives of the sinc functions in terms of the sinc functions themselves:

  d S_k(h, x)/dx = Σ_{l=−∞}^{+∞} c^(1)_lk S_l(h, x) ,   (7)

where

  c^(1)_kl ≡ (1/h) ∫_{−∞}^{+∞} S_l(h, x) [d S_k(h, x)/dx] dx = { 0 if k = l ; (1/h) (−1)^(k−l)/(k − l) if k ≠ l } .   (8)

For the second derivative we have

  d² S_k(h, x)/dx² = Σ_{l=−∞}^{+∞} c^(2)_lk S_l(h, x) ,   (9)

where

  c^(2)_lk ≡ (1/h) ∫_{−∞}^{+∞} S_l(h, x) [d² S_k(h, x)/dx²] dx = { −π²/(3h²) if k = l ; (−1)^(k−l) [−2/(h²(k − l)²)] if k ≠ l } .   (10)

General expressions for higher-order derivatives are also available [11]:

  c^(2r)_lk ≡ [(2r)!/h^(2r)] [(−1)^(l−k)/(l − k)^(2r)] Σ_{i=0}^{r−1} (−1)^(i+1) [π(l − k)]^(2i)/(2i + 1)!   (11)

  c^(2r)_ll ≡ (−1)^r (π/h)^(2r) / (2r + 1)   (12)

  c^(2r+1)_lk ≡ [(2r + 1)!/h^(2r+1)] [(−1)^(l−k)/(l − k)^(2r+1)] Σ_{i=0}^{r} (−1)^i [π(l − k)]^(2i)/(2i + 1)!   (13)

  c^(2r+1)_ll ≡ 0 ,   (14)

with r = 1, 2, . . . .

[FIG. 1: LSF for N = 20 and different values of k.]

III. LITTLE SINC FUNCTIONS

We now derive the new set of LSF. We consider the orthonormal basis of the wave functions of a particle in a box with infinite walls located at x = ±L,

  ψ_n(x) = (1/√L) sin[nπ(x + L)/(2L)] ,   (15)

and define

  δ_N(x, y) = C_N Σ_{n=1}^{N} ψ_n(x) ψ_n(y)
            = (C_N/4L) { sin[(2N + 1)π(x − y)/4L] / sin[π(x − y)/4L] − (−1)^N cos[(2N + 1)π(x + y)/4L] / cos[π(x + y)/4L] } ,   (16)

where C_N is a constant. Because of the completeness of the basis {ψ_n(x)} we have

  lim_{N→∞} δ_N(x, y)/C_N = δ(x − y) .   (17)

For reasons that will soon become clear, we set C_N = 2L/N and select even values of N.
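The sinc-function identities above are easy to verify numerically. The following Python sketch (our own illustration, not part of the original paper; names such as `c1` and `c2` are ad hoc) builds eq. (1) from NumPy's normalized `np.sinc`, checks the quadrature rule (6) on a Gaussian, and applies the differentiation matrices (8) and (10) to the same function:

```python
import numpy as np

h, M = 0.25, 24
k = np.arange(-M, M + 1)          # truncated index range
x = k * h                         # uniform grid x_k = k h

# Eq. (1): S_k(h, x) = sin(pi (x - kh)/h) / (pi (x - kh)/h); np.sinc(t) = sin(pi t)/(pi t)
S = lambda kk, xx: np.sinc((xx - kk * h) / h)

# Cardinality: S_0(h, j h) = delta_{0j}
assert abs(S(0, 0.0) - 1.0) < 1e-12
assert abs(S(0, 3 * h)) < 1e-12

# Quadrature, eq. (6), for f(x) = exp(-x^2): exact integral is sqrt(pi)
f = np.exp(-x**2)
assert abs(h * f.sum() - np.sqrt(np.pi)) < 1e-10

# Differentiation matrices, eqs. (8) and (10); K[j, l] = j - l
K = k[:, None] - k[None, :]
with np.errstate(divide='ignore', invalid='ignore'):
    c1 = np.where(K == 0, 0.0, (-1.0) ** K / (h * K))
    c2 = np.where(K == 0, -np.pi**2 / (3 * h**2),
                  -2.0 * (-1.0) ** K / (h**2 * K**2))

# f'(x_j) ~ sum_k c1[j,k] f(x_k) and f''(x_j) ~ sum_k c2[j,k] f(x_k)
assert np.max(np.abs(c1 @ f - (-2 * x) * f)) < 1e-8
assert np.max(np.abs(c2 @ f - (4 * x**2 - 2) * f)) < 1e-8
```

With h = 1/4 the aliasing and truncation errors for a Gaussian are in fact far below the loose tolerances used here, consistent with the exponential convergence of sinc methods.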
To simplify the notation, h ≡ 2L/N will denote the grid spacing and y_k ≡ 2kL/N = kh, with k = −N/2 + 1, −N/2 + 2, . . . , N/2 − 1, the grid points. We then define a set of N − 1 LSF

  s_k(h, N, x) ≡ (1/2N) { sin[(1 + 1/2N)(π/h)(x − kh)] / sin[(π/2Nh)(x − kh)] − cos[(1 + 1/2N)(π/h)(x + kh)] / cos[(π/2Nh)(x + kh)] } .   (18)

Fig. 1 shows 5 of these functions for N = 20. It is not difficult to prove that the s_k are orthogonal,

  ∫_{−L}^{L} s_k(h, N, x) s_j(h, N, x) dx = h δ_kj ,   (19)

and satisfy

  s_k(h, N, y_j) = δ_kj ,   (20)

properties that are also shared by the sinc functions.

[FIG. 2: Comparison between the sinc functions and the little sinc functions corresponding to N = 40.]

Therefore, it is not surprising that the LSF become the standard sinc functions when N → ∞ with h held constant in eq. (18):

  lim_{N→∞} s_k(h, N, x) = sin[π(x − kh)/h] / [π(x − kh)/h] ≡ S_k(h, x) .   (21)

This property justifies the name "little sinc functions". Fig. 2 shows the LSF for N = 40 and L = 1 together with the corresponding sinc functions. Differences between the two kinds of functions are appreciable only in the right plot, corresponding to k = 19. In this case the LSF is slightly larger than unity at the peak and its oscillations die out faster. The LSF share some properties with the sinc functions; for example, we can approximate a function f(x) on the interval (−L, L) as

  f(x) ≈ Σ_{k=−N/2+1}^{N/2−1} f(x_k) s_k(h, N, x) ,   (22)

where x_k ≡ kh. Similarly, one can express the derivatives of the LSF in terms of the LSF themselves:

  d s_k(h, N, x)/dx ≈ Σ_j [d s_k/dx]_{x=x_j} s_j(h, N, x) ≡ Σ_j c^(1)_kj s_j(h, N, x) ,   (23)

  d² s_k(h, N, x)/dx² ≈ Σ_j [d² s_k/dx²]_{x=x_j} s_j(h, N, x) ≡ Σ_j c^(2)_kj s_j(h, N, x) ,   (24)

where the c^(r)_kj are the counterparts of the coefficients shown above for the sinc functions.
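Equation (18) can be transcribed almost verbatim; the sketch below (an illustration of ours, with the removable singularity at x = kh probed by a tiny offset) checks the cardinality property (20) and the sinc limit (21):

```python
import numpy as np

def lsf(k, h, N, x):
    # Little sinc function s_k(h, N, x), eq. (18); x must avoid the removable
    # singularities of the two ratios (e.g. x = kh for the first one)
    a = (1.0 + 1.0 / (2 * N)) * np.pi / h
    b = np.pi / (2 * N * h)
    return (np.sin(a * (x - k * h)) / np.sin(b * (x - k * h))
            - np.cos(a * (x + k * h)) / np.cos(b * (x + k * h))) / (2 * N)

N, L = 20, 1.0                       # N even
h = 2 * L / N
ks = np.arange(-N // 2 + 1, N // 2)  # the N - 1 grid indices

# Cardinality, eq. (20): s_3(h, N, y_j) = delta_{3j};
# the j = 3 point is evaluated just off the singularity
for j in ks:
    xj = j * h + (1e-9 if j == 3 else 0.0)
    target = 1.0 if j == 3 else 0.0
    assert abs(lsf(3, h, N, xj) - target) < 1e-6

# Sinc limit, eq. (21): s_0(h, N, x) -> S_0(h, x) as N grows with h fixed
x0 = 0.3137
assert abs(lsf(0, h, 5000, x0) - np.sinc(x0 / h)) < 1e-3
```

A production implementation would handle the removable singularities analytically (e.g. via the limiting values) instead of offsetting the evaluation point.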
An explicit calculation yields

  c^(1)_jj = (π/4L) tan(jπ/N) ,   (25)

  c^(1)_kj = (−1)^(k−j) (π/4L) [ cot(π(j − k)/2N) + tan(π(j + k)/2N) ] ,   (26)

and

  c^(2)_jj = −(π²/24L²) [ 1 + 2N² − 3 sec²(jπ/N) ] ,   (27)

  c^(2)_kj = −(−1)^(j−k) (π²/8L²) cos(jπ/N) cos(kπ/N) / [ cos²(π(j + k)/2N) sin²(π(j − k)/2N) ] .   (28)

Because of eq. (21) these matrices reduce to the usual sinc expressions given in the preceding section when N → ∞.

It is straightforward to generalize eq. (18), defined on (−L, L), to an arbitrary interval (a, b):

  s̃_k(h, N, x) ≡ s_k(h, N, (2L/(b − a)) [x − (a + b)/2]) .   (29)

In this case the points of the grid are given by

  x_k = (b − a) k/N + (a + b)/2 .   (30)

In order to apply sinc collocation on a finite domain one commonly maps the real line onto a finite interval [5] by means of the conformal transformation

  φ(z) = log[(z − a)/(b − z)] .   (31)

This map carries the eye-shaped region

  D_E = { z = x + iy : |arg[(z − a)/(b − z)]| < d ≤ π/2 }   (32)

into the infinite strip

  D_S = { w = u + iv : |v| < d ≤ π/2 } .   (33)

Under the inverse transformation z = φ⁻¹(w), the points of the uniform grid on the real axis, u_k = kh, are mapped onto the nonuniform grid defined by the points x_k = (a + b e^(kh))/(1 + e^(kh)). In this case the sinc functions are mapped onto

  S̄_k(h, x) ≡ sin[π(φ(x) − kh)/h] / [π(φ(x) − kh)/h] ,   (34)

which equals unity at x = x_k. Consequently, it is possible to approximate a function on the interval (a, b) as

  f(x) ≈ f̄(x) = Σ_{k=−N}^{+N} f(x_k) S̄_k(h, x) .   (35)

We can test the performance of the LSF on an example selected from Ref. [9]:

  f(x) = 2x² + x − 3x³ ,   (36)

where 0 ≤ x ≤ 1. Fig. 3 compares the logarithmic error Δ(x) ≡ log₁₀ |f(x) − f̄(x)| for both kinds of functions. The solid curve corresponds to 21 LSF, whereas the other three curves correspond to the same number of conformally mapped sinc functions with spacing h = 1/4, 1/2 and 1, respectively. The LSF produce smaller errors and offer the advantage of a uniform grid.
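The second-derivative matrix (27)-(28) can be tested against a function whose second derivative is known in closed form. Since cos(πx/2L) is proportional to ψ₁ of eq. (15) and lies in the span of the LSF, the collocation relation f″(x_j) = Σ_k c^(2)_kj f(x_k) should hold essentially to machine precision for it. A sketch of ours, assuming the conventions above:

```python
import numpy as np

N, L = 16, 1.0                        # N even; interval (-L, L)
h = 2 * L / N
k = np.arange(-N // 2 + 1, N // 2)    # the N - 1 grid indices, x_k = k h
J, K = np.meshgrid(k, k, indexing='ij')

# c2[j, k] = s_k''(x_j) from eqs. (27)-(28)
diag = -(np.pi**2 / (24 * L**2)) * (1 + 2 * N**2 - 3 / np.cos(J * np.pi / N) ** 2)
with np.errstate(divide='ignore', invalid='ignore'):
    off = (-(-1.0) ** (J - K) * (np.pi**2 / (8 * L**2))
           * np.cos(J * np.pi / N) * np.cos(K * np.pi / N)
           / (np.cos(np.pi * (J + K) / (2 * N)) ** 2
              * np.sin(np.pi * (J - K) / (2 * N)) ** 2))
c2 = np.where(J == K, diag, off)

# Test function: cos(pi x / (2 L)) has f'' = -(pi/2L)^2 f and lies in the LSF span
x = k * h
f = np.cos(np.pi * x / (2 * L))
resid = c2 @ f + (np.pi / (2 * L)) ** 2 * f
assert np.max(np.abs(resid)) < 1e-9
```

The matrix built this way is symmetric, which is what makes the Hamiltonian matrix of Section IV Hermitian.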
Stenger originally introduced sinc methods for the numerical solution of differential equations [6]. Sinc-Galerkin and sinc collocation methods are particularly useful for such problems since they converge exponentially even in the presence of boundary singularities. It is not our purpose to generalize all the known mathematical results from the sinc functions to the LSF; rather, we assume that both kinds of functions share similar properties and simply compare our LSF results with those provided by the conformally mapped sinc functions. We consider example 4.1 of Ref. [8], that is to say, the inhomogeneous differential equation

  −u″(x) + u′(x) + u(x) = (4/25)² (x⁴ − 2x³ − 29x² + 62x + 38)   (37)

with the boundary conditions u(−1) = u(4) = 0.¹ The exact solution of this equation is

  u_exact(x) = (4/25)² (x + 1)² (x − 4)² .   (38)

We look for a numerical solution in terms of our LSF as

  u(x) ≈ u_ls(x) = Σ_{k=−N/2+1}^{N/2−1} u_k s_k(h, N, x − 3/2) .   (39)

¹ Notice a typo in [8], where 69x must read 62x.

[FIG. 3: Error in the interpolation of f(x) in eq. (36) using 21 functions.]

Fig. 4 shows the global and local errors, defined respectively as

  Ξ_G(N) ≡ log₁₀ ∫_{−1}^{4} [u_exact(x) − u_ls(x)]² dx = log₁₀ [ ∫_{−1}^{4} u_exact²(x) dx − h Σ_{k=−N/2+1}^{N/2−1} u_k² ] ,   (40)

  Ξ_L(N) ≡ log₁₀ |u_exact(3/2) − u_ls(3/2)| .   (41)

Notice that at large N the errors appear to decay exponentially. Unlike Ref. [8], no domain decomposition was required to solve this problem.

IV. THE SCHRÖDINGER EQUATION

As mentioned in the Introduction, sinc collocation methods have been used to obtain accurate numerical solutions of the Schrödinger equation. In this section we extend the variational approach and the results of Ref. [12] to our LSF and generalize the method to potentials with both bound and unbound states.
To simplify the presentation we begin with potentials that support only bound states.

A. Potentials containing only bound states

We consider the one-dimensional Schrödinger equation

  −(ħ²/2m) d²ψ_n(x)/dx² + V(x) ψ_n(x) = E_n ψ_n(x)   (42)

on an interval (a, b), which can be either finite or infinite. The wave functions ψ_n(x) obey the boundary conditions ψ_n(a) = ψ_n(b) = 0, which guarantee that there will be only bound states. If we express the wave functions in terms of our LSF, then equations (22), (24) and (28) allow one to derive the following matrix representation of the Hamiltonian operator:

  H_kl = −(ħ²/2m) c^(2)_kl + δ_kl V(kh) .   (43)

This equation is similar to eq. (8) of Ref. [12], except for the form of the matrix c^(2). Once the spacing h of the grid is fixed, the (N − 1) × (N − 1) matrix H_N built on the manifold of N − 1 LSF can be diagonalized. In this way one obtains an approximation to the lower part of the spectrum, consisting of the first N − 1 eigenvalues and wave functions of the Schrödinger equation (42).

[FIG. 4: Global and local errors (40) and (41), respectively, for the solution of eq. (37) in terms of N − 1 LSF.]

[FIG. 5: Comparison between the exact solution u_exact(x) (solid line) and the approximations obtained using LSF with N = 4, 6, 10.]

Going back to eq. (43), we observe that the precision of the approximate results obtained by diagonalizing the Hamiltonian matrix depends crucially both on the number of LSF and on the grid spacing. Although a small spacing can help to increase the resolution, if the number of sinc functions is not large enough the approximation will not capture the natural scale of the problem and the overall precision will be poor. On the other hand, a too large spacing will certainly give poor results, because the details of the problem will not be resolved.
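Assembling eq. (43) and diagonalizing it takes only a few lines. The sketch below (our own code, with the scale L fixed by hand rather than optimized) treats the harmonic oscillator with ħ = 1, m = 1/2 and ω = 2, whose exact eigenvalues are E_n = 2n + 1:

```python
import numpy as np

def lsf_c2(N, L):
    # LSF second-derivative matrix, eqs. (27)-(28)
    k = np.arange(-N // 2 + 1, N // 2)
    J, K = np.meshgrid(k, k, indexing='ij')
    diag = -(np.pi**2 / (24 * L**2)) * (1 + 2 * N**2 - 3 / np.cos(J * np.pi / N) ** 2)
    with np.errstate(divide='ignore', invalid='ignore'):
        off = (-(-1.0) ** (J - K) * (np.pi**2 / (8 * L**2))
               * np.cos(J * np.pi / N) * np.cos(K * np.pi / N)
               / (np.cos(np.pi * (J + K) / (2 * N)) ** 2
                  * np.sin(np.pi * (J - K) / (2 * N)) ** 2))
    return np.where(J == K, diag, off)

def hamiltonian(V, N, L, hbar=1.0, m=0.5):
    # Eq. (43): H_kl = -(hbar^2/2m) c2_kl + delta_kl V(k h)
    h = 2 * L / N
    x = np.arange(-N // 2 + 1, N // 2) * h
    return -hbar**2 / (2 * m) * lsf_c2(N, L) + np.diag(V(x))

# Harmonic oscillator: hbar = 1, m = 1/2, omega = 2, i.e. V(x) = x^2
E = np.linalg.eigvalsh(hamiltonian(lambda x: x**2, N=40, L=8.0))
assert abs(E[0] - 1.0) < 1e-8
assert abs(E[1] - 3.0) < 1e-8
```

With N = 40 and L = 8 (close to the PMS-optimal scale discussed below) the lowest eigenvalues come out far more accurately than the loose tolerances used here.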
On these grounds it is easy to convince oneself that an optimal spacing is likely to exist for a given number of functions. Finding this optimal spacing allows one to reach sufficiently accurate results with a relatively small number of grid points. It was found earlier [12] that an optimal spacing can be obtained by straightforward application of the principle of minimal sensitivity (PMS) to the trace of the Hamiltonian matrix, T_N(h) = Tr[H_N]. In fact, given that the trace of a Hamiltonian is invariant under unitary transformations, and that in the limit h → 0 it will therefore be independent of h,² the optimal spacing may be given by the solution of the equation

  d T_N(h)/dh = 0 .   (44)

Therefore the optimal value of h is found by numerically solving a single algebraic equation, a modest computational task; equivalently, the interval length L appearing in the LSF is treated as a variational parameter.

To test the performance of our method we apply it to the first example of Ref. [2], i.e. the harmonic oscillator with ħ = 1, m = 1/2 and ω = 2. Using a cartesian mesh, Baye and Heenen reported errors smaller than 10⁻³ for the first three eigenvalues with N = 10. Our LSF approach with N = 10 (which corresponds to 9 sinc functions) and a grid spacing optimized according to eq. (44) yields errors of −4.86 × 10⁻⁶, 1.2 × 10⁻⁴ and −1.6 × 10⁻³ for the same cases. The authors of Ref. [2] also observe that for N = 50 the higher eigenvalues become very sensitive to the value of h, and that the variation of the 30th eigenvalue with respect to h presents a marked minimum around h = 0.35. It is remarkable that the PMS condition for N = 50 yields L_PMS = 8.93, corresponding to h_PMS = 2 L_PMS/N ≈ 0.357, which is extremely close to the value quoted by Baye and Heenen.
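The PMS condition (44) involves only the diagonal of H_N, so it is cheap to evaluate. The sketch below (our own code, using a simple grid scan in place of a root-finder) locates the minimum of the trace for the harmonic oscillator of the example above and recovers the quoted L_PMS ≈ 8.93 for N = 50:

```python
import numpy as np

def trace_HN(V, N, L, hbar=1.0, m=0.5):
    # T_N(L) = Tr[H_N]: kinetic diagonal from eq. (27) plus V at the grid points
    k = np.arange(-N // 2 + 1, N // 2)
    kin = (hbar**2 / (2 * m)) * (np.pi**2 / (24 * L**2)) \
          * (1 + 2 * N**2 - 3 / np.cos(k * np.pi / N) ** 2)
    return np.sum(kin + V(k * 2 * L / N))

# Harmonic oscillator (hbar = 1, m = 1/2, omega = 2, V = x^2), N = 50:
# scan L and take the stationary (minimum) point of the trace, eq. (44)
N = 50
Ls = np.linspace(2.0, 20.0, 2000)
traces = [trace_HN(lambda x: x**2, N, L) for L in Ls]
L_pms = Ls[int(np.argmin(traces))]
h_pms = 2 * L_pms / N
assert abs(L_pms - 8.93) < 0.1     # value quoted in the text
assert abs(h_pms - 0.357) < 0.01
```

In practice one would refine the scan with a one-dimensional minimizer, but even this crude version reproduces the optimal scale at negligible cost compared with the diagonalization itself.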
It is worth noticing that while the optimal value of h quoted by Baye and Heenen is the result of an empirical observation, the almost identical value of h given by the PMS is simply the numerical solution of the algebraic equation (44), which requires negligible computer time.

As a second example of the application of the PMS to the LSF collocation method we consider the anharmonic potential

  V(x) = x² + x⁴   (45)

and assume ħ = 1 and m = 1/2 in the Schrödinger equation. This example was also studied by Baye and Heenen [2], who obtained the optimal values h = 0.55 for N = 10 and h = 0.2 for N = 50 using a cartesian mesh, which is closely related to the LSF. Fig. 6 shows the error |E₀^exact − E₀^approx| as a function of the parameter L for three different values of N (remember that the number of LSF in the expansion is N − 1). The plus symbols in the plot correspond to the predictions of the PMS condition, which generally fall close to the minimum of the curve, while the square symbols correspond to the solutions of a sort of empirical PMS condition, obtained by minimizing the modified trace

  T̃_N(h) = T_N(h) − [(N − 2)/(2N)] Σ_k V(x_k)   (46)

with respect to L. The better behavior of the modified PMS condition for just one state (as in Fig. 6) should not confuse the reader: it must be kept in mind that the PMS minimizes a sort of global error over all the states in the chosen manifold. In order to appreciate this point clearly, Fig. 7 shows the global error σ = Σ_{n=0}^{N−1} |E_n^(N) − E_n^(exact)| as a function of L for the potential x⁴/4 and N = 20. Notice that in this case the PMS condition yields the minimal error. The "exact" energies E_n^(exact) were simply chosen to be those given by the method at higher order, E_n^(60). The accuracy of the present calculations is greater than that obtained earlier for the same problem [2], where the authors report errors of the order of 10⁻⁵ and 10⁻¹² for N = 10 and N = 50, respectively.
Fig. 6 also shows that the curves for different values of N overlap in the region of small L, which suggests that the approach may not be taking the large-L region into account correctly.

B. Potentials containing bound and unbound states

In what follows we show that the LSF are also suitable for the treatment of potentials with both bound and unbound states. The application of the PMS condition to potentials with both discrete and continuous spectra is not straightforward. The potentials treated in the preceding section grow with the coordinate, and therefore their matrix representations grow with L. Since the matrix representation of the kinetic energy decreases with L, there is a minimum in the trace of the Hamiltonian matrix, and the PMS strikes a balance between the traces of the kinetic and potential energies. That minimum provides the natural length scale for the application of the method. If, however, the potential energy tends to a finite constant value as the coordinate increases, then there may not be a minimum, and the PMS will not yield the length scale required by the present approach.

² Notice that this property was invoked earlier in Ref. [14] when using a basis of harmonic-oscillator wave functions depending on an arbitrary scale parameter.

[FIG. 6: Error in the ground-state energy of the potential (45) using N = 10, 20 and 30, as a function of the parameter L. The plus and square symbols correspond to the PMS and modified PMS conditions, respectively.]

[FIG. 7: Global error σ ≡ Σ_{n=0}^{18} |E_n^(20) − E_n^(60)| for the potential V(x) = x⁴/4.]
In order to overcome that problem we substitute into the Schrödinger equation a potential Ṽ(x) that behaves exactly like the original potential V(x) in the relevant coordinate region but increases to infinity at large distances. The error introduced by this substitute potential will be negligible if the difference between the original and substitute potentials is appreciable only where the wavefunctions are expected to be vanishingly small. The substitution removes the continuous spectrum but should not affect the discrete one too much. The advantage is that we are thus able to apply the PMS condition exactly as in the preceding subsection. The substitute potential is introduced with the sole purpose of obtaining the length scale; we then diagonalize the correct Hamiltonian matrix.

We illustrate our procedure on the Morse potential, already treated earlier by means of the Lagrange mesh method [3]:

  V(r) = D [ e^(−2a(r − r_e)) − 2 e^(−a(r − r_e)) ] ,   (47)

where D = 0.10262, r_e = 2, a = 0.72, 2m = 1836 and ħ = 1. For states with nonzero angular momentum we add the centrifugal potential ħ² l(l + 1)/(2mr²), l = 0, 1, 2, . . . . The substitute potential Ṽ(r) is arbitrary; we can, for example, choose it to be the Taylor expansion of V(r) around a given point, truncated at an order large enough to be accurate at small r while satisfying lim_{r→∞} Ṽ(r) = +∞. In the present case we choose

  Ṽ(r) = Σ_{n=0}^{20} (1/n!) [dⁿV/drⁿ]_{r=10} (r − 10)ⁿ .   (48)

TABLE I: Errors ε_n ≡ E_n^approx − E_n^exact for some states of the Morse potential. Powers of ten are indicated between square brackets.

  l   n   N    Ref. [3]    PMS         h_PMS
  0   0   20   3.8[−7]     −3.3[−10]   0.223
  0   0   40   < 1[−14]    1.8[−20]    0.134
  0   5   20   4.0[−3]     2.1[−6]     0.223
  0   5   40   1.1[−9]     2.5[−14]    0.134
  1   0   20   4.1[−7]     −3.6[−10]   0.223
  1   0   40   < 1[−14]    −1.6[−20]   0.134
  1   5   20   4.4[−3]     1.5[−6]     0.223
  1   5   40   1.1[−9]     2.9[−14]    0.134
  2   0   20   4.5[−7]     −4.1[−10]   0.223
  2   0   40   < 1[−14]    1.2[−20]    0.134
  2   5   20   4.7[−3]     2.0[−7]     0.223
  2   5   40   1.1[−9]     3.9[−14]    0.134
Another difficulty to take into consideration is that our LSF are defined on (−L, L), whereas the radial coordinate is defined on (0, +∞). The obvious solution to this apparent problem would be to use the form of Ref. [1], where the left boundary of the interval is 0; such a choice would be equivalent to shifting the potential by a proper amount, thus bringing the boundary condition at the left point to coincide with the left endpoint of the LSF. Despite its simplicity, this procedure is not optimal and generally does not provide the best results. We have found that a more convenient strategy consists of keeping our LSF unchanged and shifting the coordinate by a given amount, V(r) → V(r + r̄), where r̄ is typically close to the minimum of the potential (the same shift should be applied to the centrifugal energy when necessary). The PMS applied to the shifted Hamiltonian then provides the optimal scale for the application of the LSF collocation method. One advantage of this procedure is that it maximizes the sampling of the classically allowed region, where the bound-state wave functions exhibit marked nonzero contributions. Of course, in order to take the boundary condition at r = 0 into account, the PMS length scale has to be smaller than r̄.

Table I shows the errors ε_n ≡ E_n^approx − E_n^exact for the s, p and d states of the Morse potential with n = 0 and n = 5, and compares the present results with those of Baye et al. [3]. The last column of Table I displays the PMS-optimal values of the grid spacing. Fig. 8 shows the local error η_N(x) = |ψ^(N)(x) − ψ^(80)(x)| for the ground state of the Morse potential and the approximations N = 20 and N = 40, where we have chosen r̄ = 3. We assume that the approximation of order N = 80 is sufficiently close to the exact wavefunction. Our results are more accurate than those of Baye et al. [3], who essentially considered the average of η(x) over a chosen region.
The sharp drop of the error beyond a certain value of r, clearly noticeable in Fig. 8, is due to the fact that our LSF reproduce the wave function only in a finite region, outside of which η(x) is merely the value of ψ^(80)(x). It is worth noticing that η_N(x) is almost uniform in the region covered by the LSF.

[FIG. 8: Error η_N(x) = |ψ^(N)(x) − ψ^(80)(x)| as a function of x for the ground state of the Morse potential.]

We have also applied our method to the one-dimensional Morse potential considered by Wei [15],

  V(x) = D ( e^(−2αx) − 2 e^(−αx) + 1 ) ,   (49)

where −∞ < x < ∞, D = 0.0224, α = 0.9374, m = 119406 and ħ = 1. This problem can be solved exactly, and the energies are given by [16]

  E_n = ħω [ (n + 1/2) − (ħω/4D) (n + 1/2)² ] ,   (50)

where ω = α √(2D/m). As before, we can improve our results by conveniently shifting the potential along the x-axis from V(x) to V(x + x̄). With N = 20 and x̄ = 0 we have found that the first-excited-state eigenvalue is reproduced with an accuracy of about 8 × 10⁻¹⁴, which is even better than the accuracy obtained by Wei with N = 64.³ With N = 40 the error of our method is just 8.1 × 10⁻²¹ for the ground state.

V. CONCLUSIONS

In this paper we have introduced a new class of sinc-like functions, which we name "little sinc functions" (LSF); they share some properties of the usual sinc functions but are defined on finite intervals. We have shown that the LSF collocation method provides accurate approximations for a wide class of problems when it is supplemented by the Principle of Minimal Sensitivity (PMS), which provides an optimal grid spacing. In particular, we have applied the LSF to the solution of the Schrödinger equation with potentials supporting only bound states and with potentials supporting mixed discrete and continuous spectra.
We have chosen benchmark models treated earlier by other authors, and in all cases we obtained more accurate results. The LSF collocation method thus appears to be an interesting alternative algorithm for solving many mathematical and physical problems.

³ Notice that Table I of Ref. [15] omits the ground state of the model.

Acknowledgments

P.A. acknowledges support of Conacyt grant no. C01-40633/A-1.

[1] D. Baye, Constant-step Lagrange meshes for central potentials, Journal of Physics B 28, 4399-4412 (1995)
[2] D. Baye and P.H. Heenen, Generalised meshes for quantum mechanical problems, Journal of Physics A, 2041-2059 (1986)
[3] D. Baye, M. Hesse and M. Vincke, The unexplained accuracy of the Lagrange-mesh method, Phys. Rev. E 65, 026701 (2002)
[4] V.G. Koures and F.E. Harris, International Journal of Quantum Chemistry 30, 1311 (1996)
[5] F. Stenger, Numerical methods based on sinc and analytic functions, Springer-Verlag, New York, 1993
[6] F. Stenger, A sinc-Galerkin method of solution of boundary value problems, Math. Comp. 33, 85-109 (1979)
[7] A.C. Morlet, SIAM Journal on Numerical Analysis 32, 1475-1503 (1995)
[8] N.J. Lybeck and K.L. Bowers, Sinc methods for domain decomposition, Applied Mathematics and Computation 75, 13-41 (1996)
[9] S. Narashiman, J. Majdalani and F. Stenger, A first step in applying the sinc collocation method to the nonlinear Navier-Stokes equations, Numerical Heat Transfer 41, 447-462 (2002)
[10] R. Easther, G. Guralnik and S. Hahn, Phys. Rev. D 61, 125001 (2000)
[11] R. Revelli and L. Ridolfi, Computers and Mathematics with Applications 46, 1443-1453 (2003)
[12] P. Amore, A variational sinc collocation method for strong coupling problems, Journal of Physics A 39, L349-L355 (2006)
[13] P.M. Stevenson, Phys. Rev. D 23, 2916 (1981)
[14] P. Amore, A. Aranda, H.F. Jones and F.M. Fernández, A new approximation method for time independent problems in quantum mechanics, Physics Letters A 340, 201-208 (2005)
[15] G.W. Wei, Solving quantum eigenvalue problems by discrete singular convolution, Journal of Physics B 33, 343-352 (2000)
[16] F.M. Fernández, Introduction to perturbation theory in quantum mechanics, CRC Press (2001)