
Periodic quantum graphs with predefined spectral gaps

Published 1 September 2020 © 2020 The Author(s). Published by IOP Publishing Ltd
Citation: Andrii Khrabustovskyi 2020 J. Phys. A: Math. Theor. 53 405202. DOI 10.1088/1751-8121/aba98b


Abstract

Let Γ be an arbitrary ${\mathbb{Z}}^{n}$-periodic metric graph which does not coincide with a line. We consider the Hamiltonian ${\mathcal{H}}_{\varepsilon }$ on Γ with the action −ɛ−1d2/dx2 on its edges; here ɛ > 0 is a small parameter. Let $m\in \mathbb{N}$. We show that under a proper choice of vertex conditions the spectrum $\sigma \left({\mathcal{H}}_{\varepsilon }\right)$ of ${\mathcal{H}}_{\varepsilon }$ has at least m gaps when ɛ is small enough. We demonstrate that the asymptotic behavior of these gaps and of the bottom of $\sigma \left({\mathcal{H}}_{\varepsilon }\right)$ as ɛ → 0 can be completely controlled through a suitable choice of the coupling constants appearing in those vertex conditions. We also show how to ensure, for fixed (small enough) ɛ, the precise coincidence of the left endpoints of the first m spectral gaps with predefined numbers.


Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

Introduction

Traditionally the name quantum graph refers to a pair $\left({\Gamma},\mathcal{H}\right)$, where Γ is a network-shaped structure of vertices connected by edges of certain positive lengths (metric graph) and $\mathcal{H}$ is a second order self-adjoint differential operator on Γ (Hamiltonian). Hamiltonians are determined by differential operations on the edges and certain interface conditions at the vertices. We refer to the monograph [5] for a broad overview and an extensive bibliography on this topic.

Quantum graphs arise naturally in mathematics, physics, chemistry and engineering as simplified models of wave propagation in quasi-one-dimensional systems looking like narrow neighborhoods of graphs. Typical applications include quantum wires [23, 24], photonic crystals [29, 30], graphene and carbon nanostructures [21, 31], quantum chaos [25, 26] and many other areas. For more details concerning origins of quantum graphs see [27] and [5, chapter 7].

In various applications (for example, to the aforementioned graphene and carbon nanostructures, and to photonic crystals) periodic infinite graphs are studied. In what follows, in order to simplify the presentation (but without any loss of generality), we assume that our graphs are embedded into ${\mathbb{R}}^{d}$ for some $d\in \mathbb{N}$. An infinite metric graph ${\Gamma}\subset {\mathbb{R}}^{d}$ is said to be ${\mathbb{Z}}^{n}$-periodic (n ⩽ d) if it is invariant under translations through some linearly independent vectors ${\nu }_{1},\dots ,{\nu }_{n}\in {\mathbb{R}}^{d}$. The Hamiltonian $\mathcal{H}$ on a ${\mathbb{Z}}^{n}$-periodic metric graph Γ is said to be periodic if it commutes with these translations.

It is well known that the spectrum of a periodic Hamiltonian on a periodic metric graph can be represented as a locally finite union of compact intervals (spectral bands). A bounded open interval is called a gap if it has empty intersection with the spectrum, while its endpoints belong to it. The band structure of the spectrum suggests that gaps may exist in principle. In general, however, the presence of gaps is not guaranteed: two spectral bands may overlap, and then the corresponding gap disappears. For instance, if Γ is a rectangular lattice and $\mathcal{H}$ is defined by the operation −d2/dx2 on the edges and the standard Kirchhoff conditions at the vertices, then $\sigma \left(\mathcal{H}\right)$ has no gaps: it coincides with [0, ∞).

Existence and location of spectral gaps are of primary interest because of various applications, for example in the physics of photonic crystals: periodic nanostructures whose characteristic property is that light waves at certain optical frequencies fail to propagate in them, which is caused by gaps in the spectrum of the Maxwell operator or related scalar operators. For more details we refer to [29, 30], where periodic photonic and acoustic media are studied in high contrast regimes leading to the appearance of Dirichlet-to-Neumann-type operators on periodic graphs.

To create spectral gaps one can use geometric means. For example, given a fixed graph, one 'decorates' it, changing its geometric structure at each vertex: either one attaches to each vertex a copy of a certain fixed compact graph [28] (see also [39], where a similar idea was used for discrete graphs), or at each vertex one disconnects the edges emanating from it and then connects their loose endpoints by a certain additional graph (a 'spider') [8, 37].

Another way to open spectral gaps is to use 'advanced' vertex conditions. For example, as we already noted, the spectrum of the Kirchhoff Laplacian on a rectangular lattice has no gaps; however (see [9]), if we replace the Kirchhoff conditions by the so-called δ-conditions of strength α ≠ 0, one immediately gets infinitely many gaps provided the lattice-spacing ratio is a rational number.
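The gap-opening effect of δ-conditions can be probed numerically in its simplest one-dimensional analogue: for −d2/dx2 on the line with δ-couplings of strength α at the integer points (the classical Kronig–Penney model), λ = k2 lies in the spectrum iff |cos k + α sin(k)/(2k)| ⩽ 1. The following sketch is ours and purely illustrative (it is not the rectangular-lattice computation of [9]); the function name is our own:

```python
import numpy as np

def delta_lattice_gaps(alpha, k_max=12.0, n=40000):
    """Spectral gaps (in the variable lambda = k^2, k > 0) of -u'' on the
    line with delta-couplings of strength alpha at the integer points:
    lambda = k^2 is in the spectrum iff |cos k + alpha*sin(k)/(2k)| <= 1."""
    k = np.linspace(1e-6, k_max, n)
    disc = np.cos(k) + alpha * np.sin(k) / (2.0 * k)
    in_band = np.abs(disc) <= 1.0 + 1e-12   # small tolerance for roundoff
    gaps, start = [], None
    for ki, ok in zip(k, in_band):
        if not ok and start is None:
            start = ki                          # a gap opens
        elif ok and start is not None:
            gaps.append((start ** 2, ki ** 2))  # a gap closes
            start = None
    return gaps

print(len(delta_lattice_gaps(0.0)))   # Kirchhoff case (alpha = 0): no gaps
print(len(delta_lattice_gaps(2.0)))   # alpha != 0: several gaps below k_max^2
```

For α = 0 the discriminant is cos k, so the whole half-line [0, ∞) is covered by bands. For α > 0 the first interval returned lies below the bottom of the spectrum rather than between two bands; the remaining intervals are genuine gaps opening near k = jπ.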

Further results on the opening of spectral gaps for periodic quantum graphs, as well as various estimates on their location and lengths, can be found in [1, 3, 6, 10, 13, 14, 20–22, 31–36].

When designing materials with prescribed properties it is desirable not only to open up spectral gaps, but also to be able to control their location and length via a suitable choice of operator coefficients and/or geometry of the medium. We addressed this problem for various classes of periodic operators in a series of papers [4, 11, 17–19]. In particular, periodic quantum graphs were treated in [4]. There the required structure of the spectrum is achieved via a combination of the two approaches described above: taking a fixed periodic graph Γ0 we decorate it by attaching to each period cell m compact graphs Yij ; here j = 1, ..., m, while the subscript $i\in {\mathbb{Z}}^{n}$ indicates to which period cell we attach Yij (see figure 1, where m = 2). On Γ we considered the Hamiltonian ${\mathcal{H}}_{\varepsilon }$ defined by the operation −ɛ−1d2/dx2 on the edges and the Kirchhoff conditions at all its vertices except the points of attachment of Yij to Γ0; at these points we posed (a kind of) δ'-conditions 1 . Note that the vertex conditions we dealt with in [4] 'generate' only Hamiltonians with $\mathrm{inf}\left(\sigma \left({\mathcal{H}}_{\varepsilon }\right)\right)=0$. It was proven that $\sigma \left({\mathcal{H}}_{\varepsilon }\right)$ has at least m gaps for small enough ɛ; these gaps converge (as ɛ → 0) to some intervals (Aj , Bj ) ⊂ [0, ∞) whose location and lengths can be controlled through a suitable choice of the coupling constants appearing in those δ'-conditions and of the 'sizes' of the attached graphs Yij .


Figure 1. Example of a periodic graph utilized in [4]. Reproduced from [4]. © IOP Publishing Ltd. All rights reserved.


In the current paper we continue the research started in [4]. We prove that the required structure of the spectrum can be achieved solely by an appropriate choice of vertex conditions, without any assumptions on the graph geometry. Namely, let Γ be a ${\mathbb{Z}}^{n}$-periodic metric graph. The only assumption we impose on it is that Γ does not coincide with a line. On Γ we consider the Hamiltonian ${\mathcal{H}}_{\varepsilon }$ defined by the operation −ɛ−1d2/dx2 on the edges and either Kirchhoff, δ or δ'-type (different from those treated in [4]) conditions at the vertices; see (1.7)–(1.9). We prove that $\sigma \left({\mathcal{H}}_{\varepsilon }\right)$ has at least m gaps; when ɛ → 0 the first m gaps (respectively, the infimum of $\sigma \left({\mathcal{H}}_{\varepsilon }\right)$) converge to some intervals $\left({A}_{j},{B}_{j}\right)\subset \mathbb{R}$, j = 1, ..., m (respectively, to some number ${B}_{0}\in \mathbb{R}$); the location of Aj , j = 1, ..., m and Bj , j = 0, ..., m depends in an explicit way on the coupling constants appearing in the δ and δ'-type vertex conditions; see theorem 1.1. Moreover, choosing these coupling constants in a proper way, one can completely control Aj and Bj , making them coincide with predefined numbers; see theorem 3.1. Note that, in contrast to [4], the limiting intervals and the bottom of the spectrum do not necessarily lie on the positive semi-axis: the numbers Aj , Bj are also allowed to be negative. Finally, we show that for fixed (small enough) ɛ one can guarantee the precise coincidence of the left endpoints of the first m gaps with prescribed numbers; see theorem 3.2.

The method we use to prove the convergence of spectra is different from the one used in [4], where we utilized Simon's result [40] about monotonic sequences of forms. In the current work we apply the abstract lemma from [12] serving to compare eigenvalues of two self-adjoint operators acting in different Hilbert spaces. The advantage of this approach is that we are able not only to prove the convergence of spectra, but also to estimate the rate of convergence.

The structure of the paper is as follows. In section 1 we introduce the Hamiltonian ${\mathcal{H}}_{\varepsilon }$ and formulate the main convergence result. Its proof is given in section 2. In section 3 we demonstrate how to control the location of spectral gaps.

1. Setting of the problem and main result

1.1. Metric graph Γ

Let $n\in \mathbb{N}$ and let Γ be an arbitrary connected ${\mathbb{Z}}^{n}$-periodic locally finite metric graph. The only assumptions we impose on the geometry of Γ are that it does not coincide with a line (see footnote 2 explaining the role of this assumption) and that its fundamental domain is compact (see below). W.l.o.g. (cf the discussion after definition 4.1.1 in [5]) one can assume that Γ is embedded into ${\mathbb{R}}^{d}$ with d = n for n ⩾ 3 and d = 3 for n = 1, 2. We also assume that Γ has no loops; otherwise one can break them into pieces by introducing new intermediate vertices.

By ${\mathcal{E}}_{{\Gamma}}$ and ${\mathcal{V}}_{{\Gamma}}$ we denote the sets of edges and vertices of Γ, respectively. By l = l(e) we denote the function assigning to each edge e its length l(e). We assume that l(e) < ∞ for each $e{\in \mathcal{E}}_{{\Gamma}}$. In a natural way we introduce on each edge $e{\in \mathcal{E}}_{{\Gamma}}$ the local coordinate xe ∈ [0, l(e)], so that xe = 0 and xe = l(e) correspond to the endpoints of e. For $v\in {\mathcal{V}}_{{\Gamma}}$ we denote by $\mathcal{E}\left(v\right)$ the set of edges emanating from v.
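The combinatorial data of this subsection (edges e with lengths l(e) < ∞, and the sets $\mathcal{E}\left(v\right)$) can be mirrored in a minimal data structure; the class and method names below are our own illustration, not notation from the paper:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Edge:
    v0: int          # vertex at local coordinate x_e = 0
    v1: int          # vertex at local coordinate x_e = l(e)
    length: float    # l(e), assumed finite and positive

@dataclass
class MetricGraph:
    edges: list = field(default_factory=list)

    def add_edge(self, v0, v1, length):
        # loops are excluded, mirroring the assumption made for Gamma
        assert v0 != v1 and 0.0 < length < float("inf")
        self.edges.append(Edge(v0, v1, length))

    def edges_at(self, v):
        """The set E(v) of edges emanating from vertex v."""
        return [e for e in self.edges if v in (e.v0, e.v1)]

# a toy path with two edges
g = MetricGraph()
g.add_edge(0, 1, 1.0)
g.add_edge(1, 2, 1.0)
print(len(g.edges_at(1)))   # vertex 1 has two emanating edges
```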

The ${\mathbb{Z}}^{n}$-periodicity of Γ means that

$x\in {\Gamma}\enspace {\Leftrightarrow}\enspace x+{\nu }_{k}\in {\Gamma},\quad k=1,\dots ,n,$

for some linearly independent vectors ${\nu }_{1},\dots ,{\nu }_{n}\in {\mathbb{R}}^{d}$. Let us introduce for $i=\left({i}_{1},\dots ,{i}_{n}\right)\in {\mathbb{Z}}^{n}$ the mapping i⋅ :  Γ → Γ defined by

$i\cdot x=x+\sum _{k=1}^{n}{i}_{k}{\nu }_{k},\quad x\in {\Gamma}.$ (1.1)

We denote by Y a fundamental domain of Γ, i.e. a compact set (see the assumption above) satisfying

In particular, the above condition implies that the vertices on the boundary of the fundamental domain cannot have any common edges. Evidently, a fundamental domain is not uniquely defined. Note that for any $r=\left({r}_{1},\dots ,{r}_{n}\right)\in {\mathbb{N}}^{n}$ the graph Γ is also invariant under translations through the vectors ${\nu }_{1}^{r},\dots ,{\nu }_{n}^{r}$ defined by ${\nu }_{k}^{r}={r}_{k}{\nu }_{k}$. The corresponding fundamental domain is the set Yr given by

Equation (1.2)

Finally, we denote by ${\mathcal{U}}_{Y}$ the set of points of a fundamental domain Y that simultaneously belong to 'neighboring' fundamental domains, i.e.

An example of a ${\mathbb{Z}}^{2}$-periodic graph is presented in figure 2(a). This is an equilateral hexagonal lattice in ${\mathbb{R}}^{2}$, which is invariant under translations through the vectors ${ \overrightarrow {\nu }}_{1}=\left(\sqrt{3},0\right)$, ${ \overrightarrow {\nu }}_{2}=\left(-\frac{\sqrt{3}}{2},\frac{3}{2}\right)$. Its fundamental domain Y is highlighted in bold lines. In figure 2(b) one sees the fundamental domain Yr (1.2) for r = (2, 2). In these figures the bold dots are the vertices belonging to ${\mathcal{U}}_{Y}$ and ${\mathcal{U}}_{{Y}^{r}}$, respectively.


Figure 2. (a) ${\mathbb{Z}}^{2}$-periodic graph Γ and its fundamental domain. (b) The fundamental domain Yr for r = (2, 2). (c) Decomposition of Yr for m = 3.


1.2. Decomposition of a fundamental domain

It is easy to see that for any $m\in \mathbb{N}$ there exists $r=\left({r}_{1},\dots ,{r}_{n}\right)\in {\mathbb{N}}_{0}^{n}$ such that the fundamental domain Yr (1.2) can be represented as a union

Equation (1.3)

of non-empty compact sets Yj , j = 0, ..., m satisfying the following conditions:

Equation (1.4)

W.l.o.g. we may assume that the points belonging to ${\mathcal{V}}_{j}$ are vertices of Γ (if $v\in {\mathcal{V}}_{j}$ lies in the interior of an edge of Γ, we can regard it as a vertex with two outgoing edges). It is easy to see that such a decomposition is always possible for large enough r1, r2, ..., rn 2 . Of course, such a decomposition is not unique. For example, in figure 2(c) the domain Yr is decomposed in such a way that (1.3) and (1.4) hold with m = 3: Y0 consists of the bold solid lines, while Y1, Y2, Y3 each consist of one dashed edge; the black square is $\tilde {v}$, and the white circles indicate the vertices belonging to ${\mathcal{V}}_{j}$ (in the figure each ${\mathcal{V}}_{j}$, j = 1, 2, 3, consists of two vertices denoted by vj1 and vj2).

Now, let $m\in \mathbb{N}$ be given and let us fix $r=\left({r}_{1},\dots ,{r}_{n}\right)\in {\mathbb{N}}_{0}^{n}$ such that the fundamental domain Yr admits the representation (1.3) and (1.4). We set for $i\in {\mathbb{Z}}^{n}$:

where the mapping ir ⋅ :  Γ → Γ is defined by ${i}^{r}\cdot x=x+\sum _{k=1}^{n}{i}_{k}{r}_{k}{\nu }_{k},\quad x\in {\Gamma}.$

The vertices belonging to ${\mathcal{V}}_{ij}$ will support δ'-type conditions, the vertices ${\tilde {v}}_{i}$ will support δ-conditions, and at the remaining vertices Kirchhoff conditions will be posed.

1.3. Functional spaces

In what follows if $u:{\Gamma}\to \mathbb{C}$ and $e{\in \mathcal{E}}_{{\Gamma}}$ then by ue we denote the restriction of u onto the interior of e. Via a local coordinate xe we identify ue with a function on (0, l(e)).

The space L 2(Γ) consists of functions $u:{\Gamma}\to \mathbb{C}$ such that ue ∈ L 2(0, l(e)) for each edge e and

$\sum _{e\in {\mathcal{E}}_{{\Gamma}}}{\Vert}{u}_{e}{{\Vert}}_{{\mathsf{L}}^{2}\left(0,l\left(e\right)\right)}^{2}{< }\infty .$

The space ${\tilde {\mathsf{H}}}^{k}\left({\Gamma}\right)$, $k\in \mathbb{N}$, consists of functions $u:{\Gamma}\to \mathbb{C}$ such that ue belongs to the Sobolev space H k (0, l(e)) for each edge e and

$\sum _{e\in {\mathcal{E}}_{{\Gamma}}}{\Vert}{u}_{e}{{\Vert}}_{{\mathsf{H}}^{k}\left(0,l\left(e\right)\right)}^{2}{< }\infty .$

By ${\mathsf{H}}_{\mathfrak{h}}^{1}\left({\Gamma}\right)$ we denote the subspace of ${\tilde {\mathsf{H}}}^{1}\left({\Gamma}\right)$ consisting of the functions $u\in {\tilde {\mathsf{H}}}^{1}\left({\Gamma}\right)$ such that

  • if $v\in {\mathcal{V}}_{{\Gamma}}{\backslash}\left({\cup }_{i\in {\mathbb{Z}}^{n}}{\cup }_{j=1}^{m}{\mathcal{V}}_{ij}\right)$ then u is continuous at v, i.e. the limiting value of u(x) when x approaches v along $e\in \mathcal{E}\left(v\right)$ is the same for each $e\in \mathcal{E}\left(v\right)$. We denote this value by u(v);
  • if $v\in {\mathcal{V}}_{ij}={Y}_{ij}\cap {Y}_{i0}$ for some $i=\left({i}_{1},\dots ,{i}_{n}\right)\in {\mathbb{Z}}^{n}$, j ∈ {1, ..., m} then
    • 1. the limiting value of u(x) when x approaches v along $e\in \mathcal{E}\left(v\right)\cap {Y}_{i0}$ is the same for each $e\in \mathcal{E}\left(v\right)\cap {Y}_{i0}$; we denote this value by u0(v);
    • 2. the limiting value of u(x) when x approaches v along $e\in \mathcal{E}\left(v\right)\cap {Y}_{ij}$ is the same for each $e\in \mathcal{E}\left(v\right)\cap {Y}_{ij}$; we denote this value by uj (v).

1.4. Operator ${\mathcal{H}}_{\varepsilon }$

Let ɛ > 0 be a small parameter. In L 2(Γ) we introduce the quadratic form ${\mathfrak{h}}_{\varepsilon }$,

Equation (1.5)

on the domain $\mathrm{d}\mathrm{o}\mathrm{m}\left({\mathfrak{h}}_{\varepsilon }\right)={\mathsf{H}}_{\mathfrak{h}}^{1}\left({\Gamma}\right)$. Here αj , βj , γ are real constants with αj ≠ 0, βj ≠ 0 [this assumption is needed to avoid decoupling at the vertex v, cf (1.9)]. These constants are at our disposal and will be specified later in section 3. The second and third terms on the right-hand side of (1.5) are indeed finite for $u\in {\tilde {\mathsf{H}}}^{1}\left({\Gamma}\right)$; this follows easily from the trace inequality [5, lemma 1.3.8]

and the periodicity of Γ. It is also straightforward to verify that the form ${\mathfrak{h}}_{\varepsilon }$ is densely defined in L 2(Γ), lower semibounded and closed. By the first representation theorem [16, theorem 6.2.1] there exists a unique self-adjoint operator ${\mathcal{H}}_{\varepsilon }$ associated with the form ${\mathfrak{h}}_{\varepsilon }$, i.e.

Equation (1.6)

where ${\mathfrak{h}}_{\varepsilon }\langle u,w\rangle $ is the sesquilinear form, which corresponds to the quadratic form (1.5).

The domain of ${\mathcal{H}}_{\varepsilon }$ consists of functions $u\in {\mathsf{H}}_{\mathfrak{h}}^{1}\left({\Gamma}\right)\cap {\tilde {\mathsf{H}}}^{2}\left({\Gamma}\right)$ satisfying

Equation (1.7)

Equation (1.8)

Equation (1.9)

where xe ∈ [0, l(e)] is a natural coordinate on $e\in \mathcal{E}\left(v\right)$ such that xe = 0 at v. The action of ${\mathcal{H}}_{\varepsilon }$ is

Equation (1.10)

Condition (1.7) is usually referred to as Kirchhoff coupling (sometimes the name Neumann coupling is used), and condition (1.8) is known as δ-coupling of strength γɛ. We refer to conditions (1.9) as δ'-type coupling for the following reason. Suppose that $v\in {\mathcal{V}}_{ij}$ has only two outgoing edges $e\in \mathcal{E}\left(v\right)\cap {Y}_{i0}$ and $\tilde {e}\in \mathcal{E}\left(v\right)\cap {Y}_{ij}$. Also let βj = 1. Then conditions (1.9) are equivalent to

Taking into account the definition of the coordinates xe and ${\mathbf{x}}_{\tilde {e}}$, we conclude that (1.9) coincides with the usual δ'-conditions of strength ${\left({\alpha }_{j}\varepsilon \right)}^{-1}$ at a point on the line [2, section 1.4].
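For the reader's convenience we recall the textbook definition being referred to here (cf [2]); β plays the role of ${\left({\alpha }_{j}\varepsilon \right)}^{-1}$ in the identification above:

```latex
% delta'-interaction of strength beta at the origin, for -d^2/dx^2 on the line:
u'(0+) = u'(0-) =: u'(0), \qquad u(0+) - u(0-) = \beta\, u'(0),
% with associated quadratic form, defined on H^1(R \setminus {0}):
\int_{\mathbb{R}} |u'(x)|^2\,\mathrm{d}x \;+\; \beta^{-1}\,|u(0+) - u(0-)|^2 .
```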

1.5. Main results

We denote

Equation (1.11)

Then for j = 1, ..., m we set

Equation (1.12)

where αj , βj are the real non-zero constants from (1.5). We assume that the Aj are pairwise distinct; in this case we can renumber them in such a way that

Equation (1.13)

Finally, we consider the following equation (for unknown $\lambda \in \mathbb{C}{\backslash}\left\{{A}_{1},\dots ,{A}_{m}\right\}$):

Equation (1.14)

where γ is the real constant from (1.5). It is easy to show that this equation has exactly m + 1 roots Bj , j = 0, ..., m; they are real and, after an appropriate renumbering, satisfy

Equation (1.15)
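The interlacing of the Bj with the Aj can be illustrated numerically: in section 2.4 below the Bj arise as the eigenvalues of the operator ${\mathcal{H}}_{0}^{N}$, represented by an 'arrowhead' matrix whose diagonal carries the Aj and whose first row and column carry the couplings; for pairwise distinct Aj and non-zero couplings the eigenvalues of such a matrix strictly interlace the Aj . The numerical data below are hypothetical placeholders, not constants from the paper:

```python
import numpy as np

# hypothetical data: m = 3 pairwise distinct A_j, non-zero couplings c_j,
# and a diagonal entry d0 playing the role of the gamma-dependent term
A = np.array([1.0, 2.5, 4.0])
c = np.array([0.7, 0.3, 0.5])
d0 = -0.8

m = len(A)
M = np.zeros((m + 1, m + 1))
M[0, 0] = d0
M[0, 1:] = c            # arrowhead shape: couplings in the first row/column
M[1:, 0] = c
M[1:, 1:] = np.diag(A)

B = np.linalg.eigvalsh(M)   # ascending: B_0 <= B_1 <= ... <= B_m
for j in range(m):          # strict interlacing of eigenvalues and A_j
    assert B[j] < A[j] < B[j + 1]
print(B)
```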

We are now in a position to formulate the first main result of this work.

Theorem 1.1. There exist positive constants Λ0 (depending on Y) and CA , CB , ɛ0 (depending on αj , βj , γ and Y) such that for all ɛ < ɛ0 the spectrum of ${\mathcal{H}}_{\varepsilon }$ has the following structure within (−∞, Λ0 ɛ−1]:

Equation (1.16)

where the numbers Aj,ɛ , j = 1, ..., m and Bj,ɛ , j = 0, ..., m satisfy

Equation (1.17)

moreover

Equation (1.18)

the numbers Aj , Bj are specified by (1.12)–(1.14).

2. Proof of theorem 1.1

2.1. Preliminaries

To simplify the notation we assume that the fundamental domain Yr admits the representation (1.3) and (1.4) for r = 0, i.e., already the initial fundamental domain Y admits such a representation. In the general case one should simply change the notation accordingly.

In the following, if $\mathcal{H}$ is a self-adjoint lower semibounded operator with purely discrete spectrum, we denote by ${\left\{{\lambda }_{k}\left(\mathcal{H}\right)\right\}}_{k\in \mathbb{N}}$ the sequence of its eigenvalues arranged in ascending order and repeated according to their multiplicities.

The Floquet–Bloch theory [5, chapter 4] establishes a relationship between the spectrum of the operator ${\mathcal{H}}_{\varepsilon }$ and the spectra of certain operators ${\mathcal{H}}_{\varepsilon }^{\theta }$ in L 2(Y). Namely, let

We denote by ${\mathsf{H}}_{\mathfrak{h}}^{1,\theta }\left({\Gamma}\right)$ the set of functions $u:{\Gamma}\to \mathbb{C}$ such that ue ∈ H 1(0, l(e)) for each $e\;{\in \mathcal{E}}_{{\Gamma}}$, u satisfies the same conditions at the vertices of Γ as functions from ${\mathsf{H}}_{\mathfrak{h}}^{1}\left({\Gamma}\right)$, and

[recall that the mapping i⋅ : Γ → Γ is defined by (1.1)]. We introduce the quadratic form ${\mathfrak{h}}_{\varepsilon }^{\theta }$ by

Equation (2.1)

Hereinafter by ${\mathcal{E}}_{Y}$ and ${\mathcal{V}}_{Y}$ we denote the sets of edges and vertices of Y, respectively; similar notations will be used for Yj . The form ${\mathfrak{h}}_{\varepsilon }^{\theta }$ is densely defined in L 2(Y), lower semibounded and closed. We denote by ${\mathcal{H}}_{\varepsilon }^{\theta }$ the operator associated with ${\mathfrak{h}}_{\varepsilon }^{\theta }$. The spectrum of ${\mathcal{H}}_{\varepsilon }^{\theta }$ is purely discrete; moreover, for each $k\in \mathbb{N}$ the function $\theta {\mapsto}{\lambda }_{k}\left({\mathcal{H}}_{\varepsilon }^{\theta }\right)$ is continuous. Consequently, the set

Equation (2.2)

According to the Floquet–Bloch theory we have the following representation:

Equation (2.3)

Along with ${\mathfrak{h}}_{\varepsilon }^{\theta }$ we also introduce the forms ${\mathfrak{h}}_{\varepsilon }^{N}$ and ${\mathfrak{h}}_{\varepsilon }^{D}$ acting on the domains

and with the action being again specified by (2.1). By ${\mathcal{H}}_{\varepsilon }^{N}$ and ${\mathcal{H}}_{\varepsilon }^{D}$ we denote the associated operators. The spectra of these operators are purely discrete. It is easy to see that

whence, using the min–max principle [7, section 4.5], we conclude

Equation (2.4)
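The bracketing (2.4) of the quasiperiodic eigenvalues between Neumann and Dirichlet ones can be tested on the simplest possible cell, −u″ on (0, 1); the prefactor ɛ−1 rescales all three spectra simultaneously and does not affect the ordering, so it is dropped. The finite-difference discretization below is our own illustration and is not part of the paper's argument:

```python
import numpy as np

def fd_eigs(n, bc, theta=0.0):
    """Eigenvalues of a finite-difference model of -u'' on (0, 1) with
    Dirichlet ('D'), Neumann ('N') or theta-quasiperiodic ('theta')
    boundary conditions; h = 1/n is the grid spacing."""
    h = 1.0 / n
    if bc == 'D':
        m = n - 1                      # interior grid points only
        A = 2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
        return np.linalg.eigvalsh(A / h ** 2)
    if bc == 'N':
        m = n + 1                      # include both endpoints
        A = 2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
        A[0, 0] = A[-1, -1] = 1.0      # mirror (Neumann) boundary rows
        return np.linalg.eigvalsh(A / h ** 2)
    # theta-quasiperiodic: u(x + 1) = exp(i*theta) u(x)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)).astype(complex)
    A[0, -1] -= np.exp(-1j * theta)
    A[-1, 0] -= np.exp(1j * theta)
    return np.linalg.eigvalsh(A / h ** 2)

eN = fd_eigs(300, 'N')
eT = fd_eigs(300, 'theta', np.pi / 2)
eD = fd_eigs(300, 'D')
for k in range(4):
    assert eN[k] <= eT[k] <= eD[k]   # Neumann-Dirichlet bracketing, cf (2.4)
```

For this cell the exact eigenvalues are k2π2 (Dirichlet), (k − 1)2π2 (Neumann) and (θ + 2πj)2 (quasiperiodic), and the discrete values reproduce the same ordering.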

In the following we mostly use two distinguished points of ${\mathbb{T}}^{n}$,

Equation (2.5)

The subscripts p and a mean periodic and antiperiodic, respectively.

Remark 2.1. The main ingredients of the proof of theorem 1.1 are two-sided estimates for the eigenvalues ${\lambda }_{k}\left({\mathcal{H}}_{\varepsilon }^{\theta }\right)$, ${\lambda }_{k}\left({\mathcal{H}}_{\varepsilon }^{N}\right)$, ${\lambda }_{k}\left({\mathcal{H}}_{\varepsilon }^{D}\right)$, see lemmata 2.1 and 2.3–2.6 below. The proof of these estimates is based on the standard min–max principle [7, theorem 4.5.3] and the result [12, lemma 2.1], both requiring no information on the structure of the domains of the operators ${\mathcal{H}}_{\varepsilon }^{\theta }$, ${\mathcal{H}}_{\varepsilon }^{N}$, ${\mathcal{H}}_{\varepsilon }^{D}$ (all calculations are conducted on the level of the associated quadratic forms). However, it is interesting to take a closer look at these operators. Let $u\in \mathrm{d}\mathrm{o}\mathrm{m}\left({\mathcal{H}}_{\varepsilon }^{{\ast}}\right)$ with ∗ ∈ {θ, N, D}. Then

  • for each $e\;{\in \mathcal{E}}_{Y}$ one has ue ∈ H 2(0, l(e)),
  • at the vertices from ${\mathcal{V}}_{Y}{\backslash}{\mathcal{U}}_{Y}$, u satisfies the same conditions as functions belonging to $\mathrm{d}\mathrm{o}\mathrm{m}\left({\mathcal{H}}_{\varepsilon }\right)$.

To describe the behaviour of u on ${\mathcal{U}}_{Y}$ we assume for simplicity that the points of ${\mathcal{U}}_{Y}$ lie in the interior of edges of Γ (in fact, one can always choose a period cell in such a way that this assumption is fulfilled). This assumption on the period cell implies, in particular, that for any $v{\in \mathcal{U}}_{Y}$ there is only one edge of ${\mathcal{E}}_{Y}$ (we denote it ev ) emanating from v. Then we get the following boundary conditions on ${\mathcal{U}}_{Y}$:

  • $u\in \mathrm{d}\mathrm{o}\mathrm{m}\left({\mathcal{H}}_{\varepsilon }^{\theta }\right)$ satisfies θ-periodic conditions at $v\;{\in \mathcal{U}}_{Y}$:
    where $w{\in \mathcal{U}}_{Y}$ is such that w = i⋅v for some $i\in {\mathbb{Z}}^{n}$ (one can show that for each $v{\in \mathcal{U}}_{Y}$ there exists a unique such w, provided the period cell is chosen as above),
  • $u\in \mathrm{d}\mathrm{o}\mathrm{m}\left({\mathcal{H}}_{\varepsilon }^{N}\right)$ satisfies Neumann conditions $\frac{\mathrm{d}{u}_{e}}{\mathrm{d}{\mathbf{x}}_{e}}\left(v\right)=0$ at $v\;{\in \mathcal{U}}_{Y}$,
  • $u\in \mathrm{d}\mathrm{o}\mathrm{m}\left({\mathcal{H}}_{\varepsilon }^{D}\right)$ satisfies Dirichlet conditions ue (v) = 0 at $v\;{\in \mathcal{U}}_{Y}$.

Above, ${\mathbf{x}}_{{e}_{v}}\in \left[0,l\left({e}_{v}\right)\right]$ is the natural coordinate on ev such that ${\mathbf{x}}_{{e}_{v}}=0$ at v; the coordinate ${\mathbf{x}}_{{e}_{w}}$ is defined in the same way. The action of all the operators above is given by (1.10).

2.2. Determination of Λ0

Recall that ${\theta }_{a}\in {\mathbb{T}}^{n}$ is given in (2.5).

Lemma 2.1. There exist Λ0 > 0 and ɛΛ > 0 such that

Equation (2.6)

Proof. For $\theta \in {\mathbb{T}}^{n}$ and ɛ ⩾ 0 we introduce in L 2(Y) the form ${\mathbf{h}}_{\varepsilon }^{\theta }$,

We denote by ${\mathbf{H}}_{\varepsilon }^{\theta }$ the self-adjoint operator associated with this form. Obviously,

Equation (2.7)

Also we observe that with respect to the space decomposition ${\mathsf{L}}^{2}\left(Y\right)={\oplus }_{j=0}^{m}{\mathsf{L}}^{2}\left({Y}_{j}\right)$ the operator ${\mathbf{H}}_{0}^{\theta }$ can be decomposed into the sum

Equation (2.8)

where the operators ${\mathbf{H}}_{0,0}^{\theta }$, ${\mathbf{H}}_{0,j}^{N}$ are associated with the forms ${\mathbf{h}}_{0,0}^{\theta }$, ${\mathbf{h}}_{0,j}^{N}$ defined as follows,

Equation (2.9)

Equation (2.10)

It is easy to see that

Equation (2.11)

and the corresponding eigenspace consists of constant functions. Due to the connectedness of Yj one has

Equation (2.12)

If ${\lambda }_{1}\left({\mathbf{H}}_{0,0}^{\theta }\right)$ were equal to 0, the corresponding eigenfunction would be constant, which is possible iff θ = θp. Thus

Equation (2.13)

It follows from (2.8), (2.11)–(2.13) that ${\lambda }_{k}\left({\mathbf{H}}_{0}^{\theta }\right)=0$ for k = 1, ..., m, while

Equation (2.14)

Using the fact that the family of forms ${\mathbf{h}}_{\varepsilon }^{\theta }$ increases monotonically as ɛ decreases, and moreover ${\mathrm{lim}}_{\varepsilon \to 0}\enspace {\mathbf{h}}_{\varepsilon }^{\theta }\langle u,u\rangle ={\mathbf{h}}_{0}^{\theta }\langle u,u\rangle $ for all $u\in \mathrm{d}\mathrm{o}\mathrm{m}\left({\mathbf{h}}_{\varepsilon }^{\theta }\right)=\mathrm{d}\mathrm{o}\mathrm{m}\left({\mathbf{h}}_{0}^{\theta }\right)$, we conclude [40, theorem 4.1]:

Equation (2.15)

Moreover, since the resolvents ${\left({\mathbf{H}}_{\varepsilon }^{\theta }+\mathrm{I}\right)}^{-1}$ decrease monotonically as ɛ decreases, and both ${\left({\mathbf{H}}_{\varepsilon }^{\theta }+\mathrm{I}\right)}^{-1}$ and ${\left({\mathbf{H}}_{0}^{\theta }+\mathrm{I}\right)}^{-1}$ are compact, one can upgrade (2.15) to norm resolvent convergence [16, theorem 8.3.5]. As a consequence we get the convergence of spectra, namely

Equation (2.16)

We set

Equation (2.17)

Since θa ≠ θp, one has Λ0 > 0. It follows from (2.16) that there exists ɛΛ > 0 such that

Equation (2.18)

Combining (2.7) and (2.18) we arrive at the desired estimate (2.6). The lemma is proven.□

2.3. Comparison of eigenvalues

Here we recall a result from [12] serving to compare eigenvalues of two operators acting in different Hilbert spaces. Let H and H' be separable Hilbert spaces, $\mathcal{H}$ and ${\mathcal{H}}^{\prime }$ be non-negative self-adjoint operators in these spaces, and $\mathfrak{h}$ and ${\mathfrak{h}}^{\prime }$ be the associated quadratic forms. We assume that both operators $\mathcal{H}$ and ${\mathcal{H}}^{\prime }$ have purely discrete spectra.

Lemma 2.2. [12, lemma 2.1] Suppose that ${\Phi}:\mathrm{d}\mathrm{o}\mathrm{m}\left(\mathfrak{h}\right)\to \mathrm{d}\mathrm{o}\mathrm{m}\left({\mathfrak{h}}^{\prime }\right)$ is a linear map such that

for all $u\in \mathrm{d}\mathrm{o}\mathrm{m}\left(\mathfrak{h}\right)$. Here δ1, δ2 are some positive constants. Then for each $j\in \mathbb{N}$ we have

Equation (2.19)

provided the denominator $1-\left(1+{\lambda }_{j}\left(\mathcal{H}\right)\right){\delta }_{1}$ is positive.

Remark 2.2. The above result was established in [12] under the assumption that dim H = dim H' = ∞; however, it is easy to see from its proof that the result remains valid for dim H' < ∞ as well. In that case (2.19) holds for j ∈ {1, ..., dim H'}.
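The displayed hypotheses of lemma 2.2 and formula (2.19) were lost in this version of the text. A reconstruction consistent with the way the lemma is invoked in the proof of lemma 2.3 (the smallness condition after (2.29) and the value of CB obtained there) would read:

```latex
% presumed hypotheses on Phi, for all u in dom(h):
\|\Phi u\|_{\mathsf{H}'}^2 \geq \|u\|_{\mathsf{H}}^2
  - \delta_1\bigl(\mathfrak{h}\langle u,u\rangle + \|u\|_{\mathsf{H}}^2\bigr),
\qquad
\mathfrak{h}'\langle \Phi u,\Phi u\rangle \leq \mathfrak{h}\langle u,u\rangle
  + \delta_2\bigl(\mathfrak{h}\langle u,u\rangle + \|u\|_{\mathsf{H}}^2\bigr),
% and then (2.19) presumably reads:
\lambda_j(\mathcal{H}') \leq
  \frac{\lambda_j(\mathcal{H}) + \delta_2\bigl(1+\lambda_j(\mathcal{H})\bigr)}
       {1 - \delta_1\bigl(1+\lambda_j(\mathcal{H})\bigr)}.
```

Indeed, with δ1 = C1ɛ and δ2 = C2ɛ this bound reproduces exactly the computation leading from (2.29) to the constant CB in the proof of lemma 2.3.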

2.4. Estimates on ${\lambda }_{k}\left({\mathcal{H}}_{\varepsilon }^{N}\right)$ and ${\lambda }_{k}\left({\mathcal{H}}_{\varepsilon }^{{\theta }_{p}}\right)$

In this subsection we denote by bold letters (e.g., u) the elements of ${\mathbb{C}}^{m+1}$. Their entries will be enumerated starting from zero, i.e.

Let ${\mathbb{C}}_{l}^{m+1}$ be the same space ${\mathbb{C}}^{m+1}$ equipped with the weighted scalar product

Equation (2.20)

[recall that lj and Nj are defined by (1.11)]. Note that ${\mathbb{C}}_{l}^{m+1}$ is isomorphic to the subspace of L 2(Y) consisting of functions that are constant on each Yj , j = 0, ..., m. In ${\mathbb{C}}_{l}^{m+1}$ we introduce the form 3

Equation (2.21)

This form is associated with the operator ${\mathcal{H}}_{0}^{N}$ in ${\mathbb{C}}_{l}^{m+1}$ being given by the symmetric [with respect to the scalar product (2.20)] matrix

We denote by ${\lambda }_{1}\left({\mathcal{H}}_{0}^{N}\right){\leqslant}{\lambda }_{2}\left({\mathcal{H}}_{0}^{N}\right){\leqslant}\dots {\leqslant}{\lambda }_{m+1}\left({\mathcal{H}}_{0}^{N}\right)$ its eigenvalues. It turns out that

Equation (2.22)

Indeed, let λ be an eigenvalue of ${\mathcal{H}}_{0}^{N}$ such that λ ∉ {A1, A2, ..., Am }, and let 0 ≠ u = (u0, ..., um ) be a corresponding eigenvector. The equation ${\mathcal{H}}_{0}^{N}\mathbf{u}=\lambda \mathbf{u}$ is a linear algebraic system for u0, ..., um . From the last m equations of this system we infer

Equation (2.23)

Note that the denominator in (2.23) is non-zero since $\lambda \ne {A}_{j}={\alpha }_{j}{\beta }_{j}^{2}{N}_{j}{l}_{j}^{-1}$. Inserting (2.23) into the first equation of the system we arrive at

Moreover, u0 ≠ 0 [otherwise, due to (2.23), u would vanish]. Hence λ is a root of equation (1.14). Evidently, the converse assertion also holds, that is

Equation (2.24)

Then (2.22) follows immediately from (1.15) and (2.24).

Lemma 2.3. There exist constants CB > 0 and ɛB > 0 such that

Equation (2.25)

Proof. W.l.o.g. we may assume that the αj and γ are non-negative. Evidently, under this assumption the operators ${\mathcal{H}}_{\varepsilon }^{N}$ are non-negative. Consequently, the operator ${\mathcal{H}}_{0}^{N}$ is also non-negative, see footnote 3 . Thus we are in the framework of lemma 2.2. In the general case we have to consider the shifted operators ${\mathcal{H}}_{\varepsilon }^{N}-\mu \mathrm{I}$ and ${\mathcal{H}}_{0}^{N}-\mu \mathrm{I}$, where μ is the smallest eigenvalue of ${\mathcal{H}}_{\varepsilon }^{N}{\vert }_{\varepsilon =1}$ (this eigenvalue could indeed be negative if one of the numbers αj , γ is negative). The operator ${\mathcal{H}}_{\varepsilon }^{N}-\mu \mathrm{I}$ is non-negative for each ɛ ∈ (0, 1] due to the fact that the family of forms ${\mathfrak{h}}_{\varepsilon }^{N}$ increases monotonically as ɛ decreases; the non-negativity of ${\mathcal{H}}_{0}^{N}-\mu \mathrm{I}$ is again due to footnote 3 .

We introduce the operator ${\Phi}:\mathrm{d}\mathrm{o}\mathrm{m}\left({\mathfrak{h}}_{\varepsilon }^{N}\right)\to {\mathbb{C}}_{l}^{m+1}$ by

Equation (2.26)

Our goal is to show that the following estimates hold for each $u\in \mathrm{d}\mathrm{o}\mathrm{m}\left({\mathfrak{h}}_{\varepsilon }^{N}\right)$:

Equation (2.27)

Equation (2.28)

with some C1, C2 > 0. By lemma 2.2 (see also remark 2.2 after it) we infer from (2.27) and (2.28) that

Equation (2.29)

provided $\left(1+{\lambda }_{j}\left({\mathcal{H}}_{\varepsilon }^{N}\right)\right){C}_{1}\varepsilon {< }1$. Set

Since $0{\leqslant}{\lambda }_{j}\left({\mathcal{H}}_{\varepsilon }^{N}\right){\leqslant}{B}_{j-1}$ [the last estimate follows from (2.4) and lemma 2.4 below] and ${B}_{j-1}{\leqslant}{B}_{m}$ for j = 1, ..., m + 1, the denominator in (2.29) is larger than 1/2 for ɛ < ɛB . Moreover, since ɛB ⩽ 1, one has $\varepsilon {< }{\varepsilon }^{1/2}$ for ɛ < ɛB . Taking all of the above into account we deduce from (2.29):

Thus estimate (2.25) holds for ɛ < ɛB with ${C}_{B}=2\left({B}_{m}\left(1+{B}_{m}\right){C}_{1}+\left(1+{B}_{m}\right){C}_{2}\right)$.

To prove (2.27) we need a Poincaré-type inequality on each Yj . Namely, let the form ${\mathbf{h}}_{0,j}^{N}$ be defined by (2.10) for j = 1, ..., m; in the same way we define ${\mathbf{h}}_{0,j}^{N}$ for j = 0. By ${\mathbf{H}}_{0,j}^{N}$ we denote the associated operators in L 2(Yj ), j = 0, ..., m. One has ${\lambda }_{1}\left({\mathbf{H}}_{0,j}^{N}\right)=0$ (the corresponding eigenspace consists of constants), while ${\lambda }_{2}\left({\mathbf{H}}_{0,j}^{N}\right){ >}0$. By the max–min principle [38] one has ${\lambda }_{2}\left({\mathbf{H}}_{0,j}^{N}\right){\leqslant}{\mathbf{h}}_{0,j}^{N}\langle v,v\rangle /{\Vert}v{{\Vert}}_{{\mathsf{L}}^{2}\left({Y}_{j}\right)}^{2}$ for each $v\in \mathrm{d}\mathrm{o}\mathrm{m}\left({\mathbf{h}}_{0,j}^{N}\right)$ such that ${\left(v,1\right)}_{{\mathsf{L}}^{2}\left({Y}_{j}\right)}=0$. Using this estimate for v := u − (Φu)j we get

Equation (2.30)

where ${C}_{1}={\left({\lambda }_{2}\left({\mathbf{H}}_{0,j}^{N}\right)\right)}^{-1}$. Using (2.30) we obtain

(in the penultimate step we use the fact that αj and γ are non-negative). Inequality (2.27) is checked.
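In our notation, the Poincaré-type step can be summarized as follows. This is a sketch reconstructed from the max–min bound quoted above; it assumes (as is standard for Neumann-type forms) that the form ${\mathbf{h}}_{0,j}^{N}$ vanishes on constants, and that (2.30) is the resulting inequality.

```latex
% Apply the max--min bound to v := u|_{Y_j} - (\Phi u)_j,
% which satisfies (v,1)_{L^2(Y_j)} = 0:
\lambda_2\bigl(\mathbf{H}_{0,j}^{N}\bigr)\,
  \bigl\Vert u - (\Phi u)_j \bigr\Vert_{\mathsf{L}^2(Y_j)}^{2}
  \;\leqslant\;
  \mathbf{h}_{0,j}^{N}\langle v,v\rangle
  \;=\;
  \mathbf{h}_{0,j}^{N}\langle u,u\rangle ,
% where the last equality uses that the form annihilates the constant
% (\Phi u)_j.  This is (2.30) with C_1 = (\lambda_2(\mathbf{H}_{0,j}^{N}))^{-1}.
```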

Now let us prove the estimate (2.28). One has:

Equation (2.31)

We estimate the remainder Rɛ as follows (below we use the estimate $\vert a{\vert }^{2}-\vert b{\vert }^{2}{\leqslant}\vert a-b\vert \left(\vert a\vert +\vert b\vert \right)$):

Equation (2.32)

To proceed further we need a standard trace estimate

Equation (2.33)

where $\tilde {C}{ >}0$ depends on Y. Applying it to $w{:=}u{\upharpoonright }_{{Y}_{j}}-{\left({\Phi}u\right)}_{j}$ and then using (2.30) we obtain

Equation (2.34)

Also, using the Cauchy–Schwarz inequality and (2.33) and taking into account that ɛ ⩽ 1, one gets

Equation (2.35)

Equation (2.36)

Combining (2.32), (2.34)–(2.36) we arrive at the estimate

Equation (2.37)

with some constant C2 depending on αj , βj , γ, Y. The required estimate (2.28) follows from (2.31), (2.37); this ends the proof of lemma 2.3. □

Lemma 2.4. One has:

Proof. By the min–max principle [7, section 4.5] we have

Equation (2.38)

where ${\mathfrak{H}}^{j}$ is the set of all j-dimensional subspaces of $\mathrm{d}\mathrm{o}\mathrm{m}\left({\mathfrak{h}}_{\varepsilon }^{{\theta }_{\mathrm{p}}}\right)$.
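In this notation, (2.38) is presumably the standard Courant–Fischer (min–max) characterization; a sketch of its expected form:

```latex
\lambda_j\bigl(\mathcal{H}_{\varepsilon}^{\theta_{\mathrm{p}}}\bigr)
  \;=\;
  \min_{V\in\mathfrak{H}^{j}}\;
  \max_{0\neq u\in V}\;
  \frac{\mathfrak{h}_{\varepsilon}^{\theta_{\mathrm{p}}}\langle u,u\rangle}
       {\Vert u\Vert_{\mathsf{L}^{2}(Y)}^{2}}\,.
```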

We introduce the operator ${\Psi}:{\mathbb{C}}_{l}^{m+1}\to {\mathsf{L}}^{2}\left(Y\right)$ by

Equation (2.39)

It is easy to see that the image of Ψ is contained in $\mathrm{d}\mathrm{o}\mathrm{m}\left({\mathfrak{h}}_{\varepsilon }^{{\theta }_{\mathrm{p}}}\right)$, and

Equation (2.40)

[recall that the form ${\mathfrak{h}}_{0}^{N}$ is given by (2.21); by ${\mathcal{H}}_{0}^{N}$ we denote the associated operator].

Let $\left\{{\mathbf{e}}^{1},{\mathbf{e}}^{2},\dots ,{\mathbf{e}}^{m+1}\right\}$ be an orthonormal system of eigenvectors of ${\mathcal{H}}_{0}^{N}$ such that ${\mathcal{H}}_{0}^{N}{\mathbf{e}}^{j}={B}_{j-1}{\mathbf{e}}^{j}$ [see (2.22)]. For j = 1, ..., m + 1 we set ${W}^{j}{:=}\mathrm{span}\left({\mathbf{e}}^{1},\dots ,{\mathbf{e}}^{j}\right)$. It is easy to see that

Equation (2.41)

Finally, we set ${V}^{j}{:=}{\Psi}{W}^{j}$; obviously ${V}^{j}\in {\mathfrak{H}}^{j}$. Then using (2.38)–(2.41) we obtain:

The lemma is proven. □

2.5. Estimates on ${\lambda }_{k}\left({\mathcal{H}}_{\varepsilon }^{{\theta }_{\mathrm{a}}}\right)$ and ${\lambda }_{k}\left({\mathcal{H}}_{\varepsilon }^{D}\right)$

Let ${\mathbb{C}}_{l}^{m}$ be the subspace of ${\mathbb{C}}^{m+1}$ consisting of vectors of the form u = (0, u1, ..., um ) with ${u}_{j}\in \mathbb{C}$, equipped with the scalar product generated by (2.20), i.e.

In this space we introduce the quadratic form

It is easy to see that ${\mathfrak{h}}_{0}^{{\theta }_{\mathrm{a}}}={\mathfrak{h}}_{0}^{N}{\upharpoonright }_{{\mathbb{C}}_{l}^{m}}$. The operator associated with this form is given by the matrix

Evidently, the eigenvalues of this matrix are the numbers A1 < A2 < ... < Am .

Lemma 2.5. There exist constants CA > 0 and ɛA > 0 such that

Equation (2.42)

Proof. The proof is similar to the proof of lemma 2.3. There is only one essential difference: instead of the operator Φ (2.26) one should use the operator ${{\Phi}}_{0}:\mathrm{d}\mathrm{o}\mathrm{m}\left({\mathfrak{h}}_{\varepsilon }^{{\theta }_{\mathrm{a}}}\right)\to {\mathbb{C}}_{l}^{m}$ defined by

and, as a consequence, instead of the Poincaré inequality (2.30) on Y0 one should use the inequality

where ${C}_{1}={\left({\lambda }_{1}\left({\mathbf{H}}_{0,0}^{{\theta }_{\mathrm{a}}}\right)\right)}^{-1}$ [recall that the operator ${\mathbf{H}}_{0,0}^{\theta }$ was introduced in the proof of lemma 2.1, and its first eigenvalue is non-zero provided θθp].□

Lemma 2.6. One has:

Proof. The proof is similar to the proof of lemma 2.4. Namely, one has to replace everywhere in the proof of lemma 2.4 the superscript θp by D, the superscript N by θa, Bj−1 by Aj , and to use instead of the mapping Ψ (2.39) its restriction to ${\mathbb{C}}_{l}^{m}$ (the image of this restriction is contained in $\mathrm{d}\mathrm{o}\mathrm{m}\left({\mathfrak{h}}_{\varepsilon }^{D}\right)$). □

2.6. Proof of theorem 1.1

It follows from (2.2), (2.4) and lemmata 2.3 and 2.4 that

Equation (2.43)

Similarly, using (2.2), (2.4) and lemmata 2.5 and 2.6 we get

Equation (2.44)

Finally, we infer from (2.2) and lemma 2.1 that

Equation (2.45)

Set

Equation (2.46)

Combining (1.15), (2.43)–(2.46) we conclude that there exists ɛ0 > 0 such that properties (1.16)–(1.18) hold for ɛ < ɛ0, with Λ0 defined by (2.17), CA defined in lemma 2.5, and CB defined in lemma 2.3. Evidently, Λ0 depends only on Y, while ɛ0, CA , CB also depend on αj , βj , γ. Theorem 1.1 is proven.

Remark 2.3. The proof of theorem 1.1 relies, in particular, on some properties of the eigenvalues of the operator ${\mathcal{H}}_{\varepsilon }^{{\theta }_{\mathrm{a}}}$—see the estimates (2.6), (2.42). In fact, the only specific property of θa we use is that θaθp. Thus, instead of ${\mathcal{H}}_{\varepsilon }^{{\theta }_{\mathrm{a}}}$ one can utilize any other ${\mathcal{H}}_{\varepsilon }^{\theta }$ with θθp—the above estimates remain valid for its eigenvalues (but, of course, with other constants Λ, ɛΛ, CA , ɛA ).

3. Control over the endpoints of spectral gaps

Our first goal is to show that under a suitable choice of coupling constants αj , βj , γ the numbers Aj , Bj (cf theorem 1.1) coincide with prescribed ones.

Throughout this section we will use the notation ${\mathcal{H}}_{\varepsilon }\left[\alpha ,\beta ,\gamma \right]$ for the operator ${\mathcal{H}}_{\varepsilon }$ defined in subsection 1.4 [recall that this operator is associated with the form given by (1.5)]; here $\alpha =\left({\alpha }_{1},\dots ,{\alpha }_{m}\right)\in {\mathbb{R}}^{m}$, $\beta =\left({\beta }_{1},\dots ,{\beta }_{m}\right)\in {\mathbb{R}}^{m}$, $\gamma \in \mathbb{R}$ are such that αj ≠ 0, βj ≠ 0 and, moreover, (1.13) holds (so we are in the framework of theorem 1.1). For the numbers Aj and Bj defined by (1.12), (1.14), (1.15) we will use the notations Aj [α, β, γ] and Bj [α, β, γ], respectively.

Theorem 3.1. Let ${\tilde {A}}_{j}$, j = 1, ..., m and ${\tilde {B}}_{j}$, j = 0, ..., m be arbitrary numbers satisfying

Equation (3.1)

We set

Equation (3.2)

where ${\tilde {r}}_{j}$, j = 1, ..., m, are defined by

Equation (3.3)

Then

Remark 3.1. The quantity under the square root in (3.2) is indeed positive. This follows easily from (3.1); the crucial observation is that $\mathrm{sign}\left({\tilde {B}}_{i}-{\tilde {A}}_{j}\right)=\mathrm{sign}\left({\tilde {A}}_{i}-{\tilde {A}}_{j}\right)\ne 0$ for ij.
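The sign observation can be checked directly; the sketch below assumes (as suggested by theorem 1.1 and the chain displayed after theorem 3.2) that (3.1) contains the interlacing ${\tilde {B}}_{0}{< }{\tilde {A}}_{1}{< }{\tilde {B}}_{1}{< }\dots {< }{\tilde {A}}_{m}{< }{\tilde {B}}_{m}$.

```latex
% Case i < j:  \tilde A_i < \tilde B_i \leqslant \tilde B_{j-1} < \tilde A_j,
%   hence  \tilde B_i - \tilde A_j < 0  and  \tilde A_i - \tilde A_j < 0.
% Case i > j:  \tilde A_i > \tilde B_{i-1} \geqslant \tilde B_j > \tilde A_j,
%   hence  \tilde B_i - \tilde A_j > 0  and  \tilde A_i - \tilde A_j > 0.
% In both cases
\operatorname{sign}\bigl(\tilde B_i - \tilde A_j\bigr)
  = \operatorname{sign}\bigl(\tilde A_i - \tilde A_j\bigr) \ne 0,
\qquad i \ne j .
```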

Proof of theorem 3.1. The equality ${A}_{j}\left[{\tilde {\alpha }}_{j},{\tilde {\beta }}_{j},\tilde {\gamma }\right]={\tilde {A}}_{j}$, j = 1, ..., m, is straightforward: one just needs to insert ${\tilde {\alpha }}_{j}$ and ${\tilde {\beta }}_{j}$ defined by (3.2) into the definition (1.12) of the numbers ${A}_{j}\left[{\tilde {\alpha }}_{j},{\tilde {\beta }}_{j},\tilde {\gamma }\right]$.

Now, let us prove that ${B}_{j}\left[{\tilde {\alpha }}_{j},{\tilde {\beta }}_{j},\tilde {\gamma }\right]={\tilde {B}}_{j}$ for j = 0, ..., m. For this purpose, we consider the following system of linear algebraic equations for the unknown $z=\left({z}_{1},{z}_{2},\dots ,{z}_{m}\right)\in {\mathbb{C}}^{m}$:

It was proven in [17] that $z=\left({\tilde {r}}_{1},\dots ,{\tilde {r}}_{m}\right)$ with ${\tilde {r}}_{j}$ being defined by (3.3) is the solution to this system. Thus for j = 1, ..., m one has $\sum _{i=1}^{m}{\tilde {A}}_{i}{\left({\tilde {A}}_{i}-{\tilde {B}}_{j}\right)}^{-1}{\tilde {r}}_{i}=-1$ or, equivalently,

Equation (3.4)

It is straightforward to check that (3.4) implies (see footnote 4)

Equation (3.5)

Using ${A}_{j}\left[{\tilde {\alpha }}_{j},{\tilde {\beta }}_{j},\tilde {\gamma }\right]={\tilde {A}}_{j}$ we conclude from (3.5) that ${\tilde {B}}_{j}$, j = 0, ..., m, are the roots of (1.14) with ${\alpha }_{j}={\tilde {\alpha }}_{j}$, ${\beta }_{j}={\tilde {\beta }}_{j}$, $\gamma =\tilde {\gamma }$ substituted. Hence ${\tilde {B}}_{j}={B}_{j}\left[{\tilde {\alpha }}_{j},{\tilde {\beta }}_{j},\tilde {\gamma }\right]$ for j = 0, ..., m. Theorem 3.1 is proven.□

Theorems 1.1 and 3.1 yield that for all ɛ < ɛ0 the spectrum $\sigma \left({\mathcal{H}}_{\varepsilon }\left[\tilde {\alpha },\tilde {\beta },\tilde {\gamma }\right]\right)$ has m gaps within $\left(-\infty ,{{\Lambda}}_{0}{\varepsilon }^{-1}\right]$; moreover, the endpoints of these m gaps and the bottom of the spectrum converge to the prescribed numbers as ɛ → 0. Our next goal is to improve this result: we show that under a proper choice of αj one can ensure the precise coincidence of the left endpoints of the spectral gaps of ${\mathcal{H}}_{\varepsilon }\left[\alpha ,\tilde {\beta },\tilde {\gamma }\right]$ with prescribed numbers.

Theorem 3.2. Let ${\tilde {A}}_{j}$, j = 1, ..., m and ${\tilde {B}}_{j}$, j = 0, ..., m be arbitrary numbers satisfying (3.1), and let ${\tilde {\beta }}_{j}$, $\tilde {\gamma }$ be defined by (3.2). Then there exist $\tilde {\varepsilon }{ >}0$ and C0 > 0 such that

where Λ0 is defined by (2.17), ${B}_{0,\varepsilon }{< }{\tilde {A}}_{1}{< }{B}_{1,\varepsilon }{< }{\tilde {A}}_{2}{< }{B}_{2,\varepsilon }{< }\dots {< }{\tilde {A}}_{m}{< }{B}_{m,\varepsilon }{< }{{\Lambda}}_{0}{\varepsilon }^{-1}$, moreover

The proof of theorem 3.2 is based on the following multi-dimensional version of the intermediate value theorem established in [15].

Lemma 3.3. [15, lemma 3.5] Let $\mathcal{D}={{\Pi}}_{k=1}^{m}\left[{a}_{k},{b}_{k}\right]$ with ak < bk , k = 1, ..., m, and suppose we are given a continuous function $F:\mathcal{D}\to {\mathbb{R}}^{m}$ such that each component Fk of F is monotonically increasing in each of its arguments. Suppose further that ${F}_{k}^{-}{< }{F}_{k}^{+}$, k = 1, ..., m, where

Then for any ${F}^{{\ast}}\in {{\Pi}}_{k=1}^{m}\left[{F}_{k}^{-},{F}_{k}^{+}\right]$ there exists a point $x\in \mathcal{D}$ such that F(x) = F*.
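The mechanism behind lemma 3.3 can be illustrated numerically. The sketch below is not from the paper: it uses a hypothetical monotone map F on a box $\mathcal{D}\subset \mathbb{R}^2$ and finds a preimage of a target F* by coordinatewise bisection (a Gauss–Seidel-type sweep), exploiting exactly the monotonicity the lemma assumes.

```python
# Toy illustration of lemma 3.3 (an intermediate-value-type result):
# F is continuous on a box D, each component F_k monotonically increasing
# in every argument; for an admissible target F* a preimage exists.
# The map F below is a hypothetical example, not taken from the paper.

def F(x):
    x1, x2 = x
    return (x1 + 0.5 * x2, 0.5 * x1 + x2)

def solve_monotone(F, lo, hi, target, sweeps=200):
    """For each k, adjust x_k by bisection so that F_k(x) = target_k,
    holding the other coordinates fixed (uses that F_k is increasing
    in x_k); repeat the sweep until the iterates settle."""
    x = [(a + b) / 2 for a, b in zip(lo, hi)]
    for _ in range(sweeps):
        for k in range(len(x)):
            a, b = lo[k], hi[k]
            for _ in range(60):  # bisection in the k-th coordinate
                x[k] = (a + b) / 2
                if F(x)[k] < target[k]:
                    a = x[k]
                else:
                    b = x[k]
    return x

x = solve_monotone(F, lo=(0.0, 0.0), hi=(1.0, 1.0), target=(1.0, 1.2))
print(x)  # close to (8/15, 14/15)
```

For this particular linear F the exact preimage of F* = (1, 1.2) is (8/15, 14/15), which the sweep recovers; the lemma itself only asserts existence, without an algorithm.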

Proof of theorem 3.2. Let δ > 0 and $\mathcal{D}{:=}{{\Pi}}_{k=1}^{m}\left[{\tilde {\alpha }}_{k}-\delta ,{\tilde {\alpha }}_{k}+\delta \right]$, where ${\tilde {\alpha }}_{1},\dots ,{\tilde {\alpha }}_{m}$ are defined by (3.2). We assume that δ is so small that

Equation (3.6)

This can indeed be achieved since (3.6) holds for $\alpha =\tilde {\alpha }$. Thus theorem 1.1 is applicable for each $\alpha \in \mathcal{D}$. Moreover, analyzing the proof of theorem 1.1, it is easy to see that the constants ɛ0, C0 in theorem 1.1 can be chosen the same for all $\alpha \in \mathcal{D}$; the proof of this fact relies on the compactness of $\mathcal{D}$. Hence there exist ɛ0 > 0 and C0 > 0 such that

Equation (3.7)

where Aj,ɛ , Bj,ɛ satisfy (1.17) and (1.18) (with ${A}_{j}\left[\alpha ,\tilde {\beta },\tilde {\gamma }\right]$, ${B}_{j}\left[\alpha ,\tilde {\beta },\tilde {\gamma }\right]$ instead of Aj and Bj ). Further, for these Aj,ɛ , Bj,ɛ we will use the notations ${A}_{j,\varepsilon }\left[\alpha ,\tilde {\beta },\tilde {\gamma }\right]$, ${B}_{j,\varepsilon }\left[\alpha ,\tilde {\beta },\tilde {\gamma }\right]$, respectively. We denote

It is easy to see that there exists $\tilde {\varepsilon }\in \left(0,{\varepsilon }_{0}\right]$ such that

Equation (3.8)

Indeed, since ${\alpha }_{j}^{-}{< }{\tilde {\alpha }}_{j}{< }{\alpha }_{j}^{+}$ and, by theorem 3.1, ${\tilde {A}}_{j}={A}_{j}\left[\tilde {\alpha },\tilde {\beta },\tilde {\gamma }\right]={\tilde {\alpha }}_{j}{\tilde {\beta }}_{j}^{2}{N}_{j}{l}_{j}^{-1}$, we have

Equation (3.9)

where ${A}_{j}^{{\pm}}{:=}{A}_{j}\left[{\alpha }^{{\pm}},\tilde {\beta },\tilde {\gamma }\right]={\alpha }_{j}^{{\pm}}{\tilde {\beta }}_{j}^{2}{N}_{j}{l}_{j}^{-1}$. Moreover for ɛ < ɛ0 we have

Equation (3.10)

Property (3.8) follows immediately from (3.9) and (3.10).

Now, let us fix $\varepsilon \in \left(0,\tilde {\varepsilon }\right]$. We introduce the function $F=\left({F}_{1},\dots ,{F}_{m}\right):\mathcal{D}\to {\mathbb{R}}^{m}$ by

Equation (3.11)

The functions Fk are continuous. Indeed, let $\alpha ,{\alpha }^{\prime }\in \mathcal{D}$. To simplify the presentation we assume that ${\alpha }_{j},{\alpha }_{j}^{\prime },\tilde {\gamma }{\geqslant}0$ (and consequently ${\mathcal{H}}_{\varepsilon }\left[\alpha ,\tilde {\beta },\tilde {\gamma }\right]{\geqslant}0$, ${\mathcal{H}}_{\varepsilon }\left[{\alpha }^{\prime },\tilde {\beta },\tilde {\gamma }\right]{\geqslant}0$); the general case requires only slight modifications. By virtue of (1.6) one has for f, g ∈ L 2(Γ),

Equation (3.12)

where $u={\left({\mathcal{H}}_{\varepsilon }\left[\alpha ,\tilde {\beta },\tilde {\gamma }\right]+\mathrm{I}\right)}^{-1}f$, $w={\left({\mathcal{H}}_{\varepsilon }\left[{\alpha }^{\prime },\tilde {\beta },\tilde {\gamma }\right]+\mathrm{I}\right)}^{-1}g$, and ${\mathfrak{h}}_{\varepsilon }\left[\alpha ,\tilde {\beta },\tilde {\gamma }\right]$ is the form associated with ${\mathcal{H}}_{\varepsilon }\left[\alpha ,\tilde {\beta },\tilde {\gamma }\right]$. Using (2.33) and taking into account that ${\alpha }_{j},{\alpha }_{j}^{\prime },\tilde {\gamma }{\geqslant}0$, we continue (3.12) as follows,

Equation (3.13)

where C > 0 is a constant. It follows from (3.13) that

whence for an arbitrary compact set $\mathcal{I}\subset \mathbb{R}$ one has

Equation (3.14)

where distH(⋅, ⋅) stands for the Hausdorff distance. Taking into account the special structure (3.7) of $\sigma \left({\mathcal{H}}_{\varepsilon }\left[\alpha ,\tilde {\beta },\tilde {\gamma }\right]\right)$, we conclude from (3.14) that ${A}_{k,\varepsilon }\left[{\alpha }^{\prime },\tilde {\beta },\tilde {\gamma }\right]-{A}_{k,\varepsilon }\left[\alpha ,\tilde {\beta },\tilde {\gamma }\right]\to 0$ as $\alpha -{\alpha }^{\prime }\to 0$, i.e. Fk is continuous. The number ${A}_{k,\varepsilon }\left[\alpha ,\tilde {\beta },\tilde {\gamma }\right]$ is the right endpoint of the kth spectral band:

Equation (3.15)

where ${\mathcal{H}}_{\varepsilon }^{\theta }\left[\alpha ,\tilde {\beta },\tilde {\gamma }\right]$ denotes the operator ${\mathcal{H}}_{\varepsilon }^{\theta }$ with ${\beta }_{j}={\tilde {\beta }}_{j}$, $\gamma =\tilde {\gamma }$. Since ${\mathcal{H}}_{\varepsilon }^{\theta }\left[\alpha ,\tilde {\beta },\tilde {\gamma }\right]{\leqslant}{\mathcal{H}}_{\varepsilon }^{\theta }\left[{\alpha }^{\prime },\tilde {\beta },\tilde {\gamma }\right]$ (in the form sense) whenever ${\alpha }_{j}{\leqslant}{\alpha }_{j}^{\prime }$ for all j = 1, ..., m, by the min–max principle we conclude for k = 1, ..., m:

Equation (3.16)

It follows from (3.15) and (3.16) that the functions Fk increase monotonically in each of their arguments. Taking into account (3.8), we infer that the function F satisfies all the requirements of lemma 3.3. Applying this lemma we conclude that there exists $\alpha \in \mathcal{D}$ such that

Equation (3.17)

Combining (3.7), (3.11), (3.17) we arrive at the statement of theorem 3.2.□

Remark 3.2. The assumption ${\tilde {A}}_{j}\ne 0$, j = 1, ..., m, in (3.1) is essential: one cannot avoid it when using the Hamiltonians ${\mathcal{H}}_{\varepsilon }$ introduced in subsection 1.4, since the numbers Aj (1.12) are always non-zero. To overcome this restriction one can add to ${\mathcal{H}}_{\varepsilon }$ a constant potential, which shifts the spectrum accordingly. Another option is to pick in each Yj , j = 0, ..., m, an internal point ${\hat{v}}_{j}$, and then to add at ${\hat{v}}_{j}$ a δ-coupling of strength $\hat{\gamma }\enspace {l}_{j}$, where lj is defined by (1.11) and $\hat{\gamma }\in \mathbb{R}$. Denote by ${\hat{\mathcal{H}}}_{\varepsilon }$ the modified Hamiltonian. Repeating verbatim the arguments used in the proof of theorem 1.1, one can show that the spectrum of ${\hat{\mathcal{H}}}_{\varepsilon }$ satisfies (1.16)–(1.18), but with ${A}_{j}+\hat{\gamma }$ and ${B}_{j}+\hat{\gamma }$ instead of Aj and Bj .

Acknowledgments

The author is supported by the Austrian Science Fund (FWF) under Project M 2310-N32. He also thanks the anonymous referees for useful comments which improved the paper considerably.

Footnotes

  • For the definition of δ- and δ′-conditions in the graph context see, e.g., [9].

  • In order to achieve the decomposition (1.3) and (1.4) we need our initial assumption that Γ does not coincide with a line. If Γ is a line, its fundamental domain Y would be a compact interval; one can decompose it in such a way that properties (ii)–(v) hold, but then the set Y0 will always be disconnected.

  • It is easy to see that for any ɛ > 0 the form ${\mathfrak{h}}_{0}^{N}$ is the restriction of the form ${\mathfrak{h}}_{\varepsilon }^{N}$ to the subspace of L 2(Y) consisting of functions that are constant on each Yj , j = 0, ..., m (as we have already noticed, this subspace is isomorphic to ${\mathbb{C}}_{l}^{m+1}$).

  • Indeed, inserting into (3.5) the ${\tilde {\beta }}_{j}$ and $\tilde {\gamma }$ defined by (3.2) and then performing simple calculations one arrives at (3.4). Running these calculations in reverse order one gets the required implication (3.4) ⇒ (3.5).
