
Basis-independent spectral methods for non-linear optical response in arbitrary tight-binding models

S M João and J M Viana Parente Lopes

Published 19 December 2019 © 2019 IOP Publishing Ltd

Citation: S M João and J M Viana Parente Lopes 2020 J. Phys.: Condens. Matter 32 125901. DOI: 10.1088/1361-648X/ab59ec

Abstract

In this paper, we develop a basis-independent perturbative method for calculating the non-linear optical response of arbitrary non-interacting tight-binding models. Our method is based on the non-equilibrium Keldysh formalism and allows for an efficient numerical implementation, within the framework of the kernel polynomial method, for systems that are not required to be translation invariant. Proof-of-concept results for the second-order optical conductivity are presented for the special case of gapped graphene with vacancies and an on-site Anderson disorder potential.


1. Introduction

Since the advent of the laser in 1960, the field of non-linear optics has attracted considerable interest. The following year marked the beginning of its systematic study, as in 1961 Franken was able to demonstrate second harmonic generation (SHG) experimentally [1]. This opened the door to a plethora of new phenomena, such as optical rectification (1962) [2] and higher harmonic generation (1967) [3]. The laser provided the powerful electric fields needed to observe these non-linear phenomena. The optical properties of crystals have been studied extensively throughout the last 30 years [4] and have recently attracted renewed attention due to the strong non-linear properties of layered materials such as graphene [5–7].

Several approaches have been developed to obtain the non-linear response, up to arbitrary order, of a crystalline system subject to an external field. Among them are generalizations of Kubo's formula to higher orders and perturbation expansions of the density matrix in both the length gauge and the velocity gauge [4, 8]. Due to the complexity of the problem, the bulk of that work was done for translation-invariant systems. Despite covering a broad range of systems, this approach is limited in scope, as it cannot deal with impurities, border effects, or even magnetic fields within the usual Peierls framework [9]. In order to simulate more realistic systems, this restriction has to be lifted. The goal is then to re-express the same formulas for the linear and non-linear optical conductivities in a framework that allows real-space calculations. In this work, we extend the work of Passos et al [10] by implementing the velocity-gauge methodology within the Keldysh formalism [11]. In this way, we develop a general perturbation procedure to deal with non-interacting fermion systems at finite temperature coupled to a time-dependent external field. In [12], an advanced Chebyshev expansion method was proposed to compute linear response functions. This paper is the next logical step, providing the generalization to all orders in perturbation theory.

Through careful categorization of all these contributions, we provide a systematic procedure to find the objects needed to calculate the conductivity at any order. These objects are expressed with no reference to a specific basis. The critical point here is that the mathematical objects provided by our perturbation expansion are precisely the ones required by the numerical spectral methods we use. This fact, combined with our diagrammatic approach, provides a straightforward way to implement the numerical calculation of the nonlinear optical conductivity for a wide range of materials. This methodology is implemented in KITE, an open-source software developed by ourselves [13] to calculate the nonlinear optical properties of a very broad range of tight-binding models.

This paper is structured as follows. In section 2, we introduce the Keldysh formalism, show how the current is calculated and define the fundamental mathematical objects of our calculations. Section 3 applies the perturbation procedure to a tight-binding model. The electromagnetic field is added through the Peierls substitution. Then, a diagrammatic procedure is developed to deal with the large number of non-trivial contributions to the conductivity at each order. Section 4 explains how to expand the operators appearing in these expressions in Chebyshev polynomials, so that they can be used with spectral methods. This provides the full formula, which may be implemented numerically in a direct way. Finally, in section 5 we apply the formalism developed in the previous sections to several different systems. We start by comparing with known results, obtained by $\boldsymbol{k}$ -space integration [8, 10], for the non-linear optical conductivity of gapped graphene. We focus solely on the intrinsic contribution of the crystal to the optical conductivity, disregarding any excitonic effects. Then, we add disorder to the system and show that our method gives the expected result. Finally, the numerical efficiency and convergence of our method are assessed.

2. Keldysh formalism

The Keldysh formalism [11] is a general perturbation scheme describing the quantum mechanical time evolution of non-equilibrium interacting systems at finite temperature. It provides a concise diagrammatic representation of the average values of quantum operators. This formalism does not rely on any particular basis, which is a critical feature for this paper. The mathematical objects obtained in our expansion are in the form required by numerical spectral methods, providing a straightforward formula for the numerical calculation of nonlinear optical response functions. In this section we will introduce the definitions of the objects used throughout the paper and show how to expand the Green's functions for fermions [14] with this formalism.

2.1. Definitions

2.1.1. Green's functions.

To use the Keldysh formalism for fermions, we need the definitions of the time-ordered, lesser, greater and anti-time-ordered Green's functions. Respectively,

Equation (1)

$$G_{ab}^{T}\left(t,t'\right)=-{\rm i}\left\langle T\,c_{a}(t)\,c_{b}^{\dagger}(t')\right\rangle$$

Equation (2)

$$G_{ab}^{<}\left(t,t'\right)={\rm i}\left\langle c_{b}^{\dagger}(t')\,c_{a}(t)\right\rangle$$

Equation (3)

$$G_{ab}^{>}\left(t,t'\right)=-{\rm i}\left\langle c_{a}(t)\,c_{b}^{\dagger}(t')\right\rangle$$

Equation (4)

$$G_{ab}^{\tilde{T}}\left(t,t'\right)=-{\rm i}\left\langle \tilde{T}\,c_{a}(t)\,c_{b}^{\dagger}(t')\right\rangle$$

All the creation and annihilation operators are in the Heisenberg picture and the labels a and b denote states belonging to a complete single-particle basis. T is the time-ordering operator and $\tilde{T}$ the anti-time-ordering operator. The average $\left\langle \cdots\right\rangle $ stands for ${{\rm Tr}}\left[\rho(t_{0})\cdots\right]/{{\rm Tr}}\left[\rho(t_{0})\right]$ in the grand canonical ensemble, $\rho$ is the density matrix and t0 denotes the time at which the external perturbation has been switched on. These are the building blocks of the Keldysh formalism. The advanced and retarded Green's functions are a simple combination of the previous objects:

Equation (5)

$$G_{ab}^{R}\left(t,t'\right)=\theta\left(t-t'\right)\left[G_{ab}^{>}\left(t,t'\right)-G_{ab}^{<}\left(t,t'\right)\right]$$

Equation (6)

$$G_{ab}^{A}\left(t,t'\right)=\theta\left(t'-t\right)\left[G_{ab}^{<}\left(t,t'\right)-G_{ab}^{>}\left(t,t'\right)\right]$$

The non-perturbed versions of these Green's functions are denoted by a lowercase g.

2.1.2. Expected value of an operator.

The expected value of the current $\boldsymbol{J}(t)$ (or of any one-particle operator) may be evaluated using these Green's functions, by tracing over its product with the perturbed lesser Green's function:

Equation (7)

$$\left\langle \boldsymbol{J}(t)\right\rangle =-{\rm i}\,{\rm Tr}\left[\boldsymbol{J}(t)\,G^{<}\left(t,t\right)\right]$$

The Fourier transform1 of $\boldsymbol{J}(t)$ is shown diagrammatically in figure 1. The circles stand for the full, perturbed operators in the presence of an external field.

Figure 1. Diagrammatic representation of the expected value of the current operator in Fourier space. The horizontal straight line ending in a circle is the lesser Green's function and the wavy line beginning in a circle represents the current operator.

2.1.3. Conductivity.

We use the same definition for the nonlinear optical conductivity as in [8, 10]:

Equation (8)

where $E^{\alpha}$ is the component of the electric field along the $\alpha$ direction and the repeated indices are assumed to be summed over. The coefficients of this expansion are the conductivities at each order in the expansion. The next section is devoted to finding the perturbation expansion of G<. In this paper we are dealing with tight-binding models, in which case the current operator will itself be a power series of the external field.

2.2. Non-interacting electronic systems

Our system is described by the many-particle time-dependent Hamiltonian

Equation (9)

$$\mathcal{H}(t)=\mathcal{H}_{0}+\mathcal{H}_{{\rm ext}}(t)$$

where $\mathcal{H}_{0}$ is a Hamiltonian that we can solve exactly and $\mathcal{H}_{{\rm ext}}(t)$ is the time-dependent external perturbation. Here we restrict ourselves to non-interacting Hamiltonians, since we are dealing with non-interacting electrons. These operators are expressed in terms of their single-particle counterparts as

Equation (10)

$$\mathcal{H}_{0}=\sum_{ab}\left(H_{0}\right)_{ab}c_{a}^{\dagger}c_{b}$$

Equation (11)

$$\mathcal{H}_{{\rm ext}}(t)=\sum_{ab}\left(H_{{\rm ext}}(t)\right)_{ab}c_{a}^{\dagger}c_{b}$$

The expansion of the perturbed lesser Green's function G< will be expressed in terms of the unperturbed Green's functions g>, g<, gR and gA in Fourier space:

Equation (12)

$$\tilde{g}^{<}\left(\omega\right)=2\pi{\rm i}\,f\left(\hbar\omega\right)\delta\left(\omega-H_{0}/\hbar\right)$$

Equation (13)

$$\tilde{g}^{>}\left(\omega\right)=-2\pi{\rm i}\left[1-f\left(\hbar\omega\right)\right]\delta\left(\omega-H_{0}/\hbar\right)$$

Equation (14)

$$\tilde{g}^{R}\left(\omega\right)=\left(\omega-H_{0}/\hbar+{\rm i}0^{+}\right)^{-1}$$

Equation (15)

$$\tilde{g}^{A}\left(\omega\right)=\left(\omega-H_{0}/\hbar-{\rm i}0^{+}\right)^{-1}$$

where $ \newcommand{\e}{{\rm e}} f\left(\epsilon\right)=\left(1+{\rm e}^{\beta(\epsilon-\mu)}\right){}^{-1}$ is the Fermi–Dirac distribution, $\beta$ is the inverse temperature and $\mu$ is the chemical potential. The Keldysh formalism and Langreth's rules provide the perturbation expansion of G< [15]. Defining $V(t)=\left({\rm i}\hbar\right){}^{-1}H_{{\rm ext}}(t)$ , the zeroth-order term in the expansion is

Equation (16)

and the first-order one is

Equation (17)

$\int{{\rm d}}^{n}\omega_{1\cdots n}$ is a shorthand for $\int\cdots\int{{\rm d}}\omega_{1}\cdots{{\rm d}}\omega_{n}$ . The second-order term is

Equation (18)

Diagrammatically, the expansion of ${\rm i}G^{<}\left(\omega\right)$ is represented in figure 2. Each wavy line ending in a circle represents an external perturbation $\tilde{V}$ . Three types of Green's functions may appear in these expansions, in a regular pattern: a lesser Green's function $\tilde{g}^{<}$ , which is always present, retarded Green's functions $\tilde{g}^{R}$ and advanced Green's functions $\tilde{g}^{A}$ . Diagrammatically, $\tilde{g}^{<}$ is represented by a dashed line, while the solid lines represent retarded or advanced Green's functions. To determine whether a solid line is retarded or advanced, one reads the diagram relative to the positions of the lesser Green's function and the outgoing line: reading clockwise (anti-clockwise) until the outgoing line is found, there can only be advanced (retarded) Green's functions. At each intersection, the corresponding external perturbation $\tilde{V}$ is inserted. An exception is made for the intersection with the line representing $\omega$ , as it still needs to be contracted.

Figure 2. Diagrammatic representation of the lesser Green's function.

If the external perturbation were a simple external field $\boldsymbol{E}(t)$ , then the coupling would be $H_{{\rm ext}}(t)=e\boldsymbol{E}(t)\cdot\boldsymbol{r}$ and the previous expressions coupled with equation (7) would suffice. Now we will turn to tight-binding Hamiltonians, for which the external coupling is actually an infinite series of operators due to the way the electromagnetic field is introduced. This affects not only the $V$ operators but also the expression for the current operator.

3. Tight-binding Hamiltonian with external electric field

Tight-binding models provide a simple framework with which to calculate transport quantities. This framework can express structural disorder in the system, whilst Peierls' substitution [9] adds an electromagnetic field as an external perturbation. Despite the simplicity of this procedure, the addition of an electromagnetic field through a phase factor yields an infinite series of operators for $H_{{\rm ext}}$ . In this section, we obtain the expression for $H_{{\rm ext}}$ and show how the expansions of the previous sections may be used to obtain the non-linear optical conductivity. This is entirely analogous to the way the external perturbation is introduced in the velocity gauge in the work of Passos et al [10].

3.1. Series expansion

Let us consider the following tight-binding Hamiltonian:

Equation (19)

$$H=\sum_{i,j}\sum_{\sigma_{1},\sigma_{2}}t_{\sigma_{1}\sigma_{2}}\left(\boldsymbol{R}_{i},\boldsymbol{R}_{j}\right)c_{\boldsymbol{R}_{i}\sigma_{1}}^{\dagger}c_{\boldsymbol{R}_{j}\sigma_{2}}$$

The $\boldsymbol{R}_{i}$ represent the lattice sites and the $\sigma_{i}$ the other degrees of freedom unrelated to the position, such as the orbitals and spin. The electromagnetic field is introduced through Peierls' substitution:

Equation (20)

To introduce both a static magnetic field and a uniform electric field, we use the following vector potential:

Equation (21)

$$\boldsymbol{A}\left(\boldsymbol{r},t\right)=\boldsymbol{A}_{1}\left(\boldsymbol{r}\right)+\boldsymbol{A}_{2}\left(t\right)$$

The electric and magnetic fields are obtained from $\boldsymbol{E}(t)=-\partial_{t}\boldsymbol{A}_{2}(t)$ and $\boldsymbol{B}(\boldsymbol{r})=\boldsymbol{\nabla}\times\boldsymbol{A}_{1}(\boldsymbol{r})$ . The introduction of the magnetic field only changes the $t_{\sigma_{1}\sigma_{2}}\left(\boldsymbol{R}_{i},\boldsymbol{R}_{j}\right)$ without introducing a time dependency. Therefore, we may assume that a magnetic field is always present, without any loss of generality for the following discussion, while keeping in mind that its introduction breaks translation invariance. Since the magnetic field only affects the hopping parameters, from now on the term in the vector potential that provides the electric field will be denoted by $\boldsymbol{A}(t)$ . The external perturbation is obtained by expanding the exponential in equation (20) and identifying the original Hamiltonian.
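As a concrete illustration of how the phase factor enters in practice, the following sketch dresses a hopping matrix with the Peierls phase of a spatially uniform $\boldsymbol{A}(t)$. The sign convention, natural units and the names `peierls`, `bonds` and `e_charge` are assumptions of this sketch, not the paper's notation.

```python
import numpy as np

# Minimal sketch (sign convention assumed): Peierls substitution for a
# uniform vector potential A.  Each hopping t_ij acquires a phase
# proportional to A . d_ij, with d_ij the bond vector pointing from j to i.
e_charge = 1.0   # |e| in natural units (assumption)
hbar = 1.0

def peierls(H, bonds, A):
    """H: (N,N) hopping matrix; bonds: (N,N,dim) array of bond vectors d_ij;
    A: vector potential (dim,).  Returns the dressed hopping matrix."""
    phase = np.exp(1j * (e_charge / hbar) * np.tensordot(bonds, A, axes=([2], [0])))
    return H * phase

# two-site example: nearest-neighbour hopping along x with spacing a = 1
H = np.array([[0.0, -1.0], [-1.0, 0.0]])
d = np.zeros((2, 2, 2))
d[0, 1] = [1.0, 0.0]    # bond vector from site 1 to site 0
d[1, 0] = [-1.0, 0.0]
HA = peierls(H, d, np.array([0.3, 0.0]))
# the dressed Hamiltonian stays hermitian because d_ji = -d_ij
assert np.allclose(HA, HA.conj().T)
```

Since the phase only multiplies existing matrix elements, the sparsity pattern of the Hamiltonian is untouched, which is what makes this perturbation convenient in real space.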

3.1.1. Expansion of the external perturbation.

Expanding the exponential in equation (20) yields an infinite series of operators for the full Hamiltonian:

Equation (22)

from which we identify, after a Fourier transform,

Equation (23)

Repeated spatial indices are understood to be summed over. We have defined

Equation (24)

$$\hat{h}^{\alpha_{1}\cdots\alpha_{n}}=\left({\rm i}\hbar\right)^{-n}\left[\hat{r}^{\alpha_{1}},\left[\hat{r}^{\alpha_{2}},\cdots\left[\hat{r}^{\alpha_{n}},\hat{H}\right]\cdots\right]\right]$$

where $\hat{\boldsymbol{r}}$ is the position operator. To first order, $\hat{h}^{\alpha}$ is just the single-particle velocity operator. Under periodic boundary conditions (PBC), the position operator $\hat{\boldsymbol{r}}$ is ill-defined, but its commutator with the Hamiltonian is not. In real space, this commutator is simply the Hamiltonian matrix element connecting the two sites i and j multiplied by the distance vector $\boldsymbol{d}_{ij}$ between them. If we define this distance vector as the distance between neighbors, instead of as the difference of the two positions, it is well defined under PBC. Using this strategy, all the $\hat{h}$ operators may be evaluated in position space by assigning to each bond the Hamiltonian matrix element multiplied by the required product of difference vectors, $h_{ij}^{\alpha_{1}\cdots\alpha_{n}}=\left({\rm i}\hbar\right){}^{-n}H_{ij}d_{ij}^{\alpha_{1}}\cdots d_{ij}^{\alpha_{n}}$ .
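The bond rule above translates almost literally into code. A minimal dense-matrix sketch, assuming the neighbor convention for $\boldsymbol{d}_{ij}$ described in the text (`generalized_velocity` is an illustrative name):

```python
import numpy as np

# Sketch of the position-space rule
# h^{a1...an}_ij = (i*hbar)^(-n) H_ij d_ij^{a1} ... d_ij^{an},
# with d_ij the bond vector between neighbouring sites (well defined under PBC).
hbar = 1.0

def generalized_velocity(H, bonds, components):
    """H: (N,N) Hamiltonian; bonds: (N,N,dim) bond vectors d_ij;
    components: tuple of cartesian indices (a1, ..., an)."""
    n = len(components)
    h = H.astype(complex) / (1j * hbar) ** n
    for a in components:
        h = h * bonds[:, :, a]
    return h

# 1D chain with 4 sites and periodic boundaries, lattice constant a = 1
N = 4
H = np.zeros((N, N))
d = np.zeros((N, N, 1))
for i in range(N):
    j = (i + 1) % N
    H[i, j] = H[j, i] = -1.0
    d[i, j, 0], d[j, i, 0] = -1.0, 1.0   # neighbour convention, not position difference
hx = generalized_velocity(H, d, (0,))
assert np.allclose(hx, hx.conj().T)      # first-order h is the (hermitian) velocity
```

Because $d_{ji}=-d_{ij}$, the first-order operator comes out Hermitian automatically, in agreement with its interpretation as the velocity operator.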

In figure 3, we see how the diagrammatic representation of the external perturbation unfolds into an infinite series of external fields. The wavy line represents $\tilde{\boldsymbol{A}}$ and the number of external fields connected to the same point is the number of commutators in equation (24).

Figure 3. Diagrammatic representation of the external perturbation.

3.1.2. Expansion of the current.

The current operator is calculated directly from the Hamiltonian, using $\hat{J}^{\alpha}=-\Omega^{-1}\partial H/\partial A^{\alpha}$ ($\Omega$ is the volume of the sample), which also follows a series expansion due to the presence of an infinite number of $\boldsymbol{A}(t)$ in $H_{{\rm ext}}$ :

Equation (25)

Figure 4 depicts the diagrammatic representation of this operator in Fourier space.

Figure 4. Diagrammatic representation of the current operator. The single small circle is to be understood as a Dirac delta.

The complexity of this expansion becomes clear. In equation (7), both the current operator and the Green's functions follow a perturbation expansion. Furthermore, each interaction operator in every one of the terms in the Green's function expansion also follows a similar expansion. We now have all the objects needed for the perturbative expansion of the conductivity.

3.2. Perturbative expansion of the conductivity

In the previous sections we laid out the expressions for each individual operator in our expansion and represented their corresponding diagrammatic depictions. In this subsection, we put together all the elements of the previous sections to provide the full diagrammatic representation of the first and second-order conductivities. This expansion closely resembles that of [16] but has several differences due to the usage of these specific Green's functions. The only thing left to do is to replace the perturbed objects in the diagrammatic representation of the expected value of the current operator by their expansions. It is straightforward to see how the diagrams fit together in figure 5, which shows all the contributing diagrams up to second order.

Figure 5. Expansion of the expected value of the conductivity in (a) first and (b) second order.

Obtaining the conductivity from the current is a matter of expressing the frequencies $\omega'$ , $\omega''$ and $\omega'''$ in terms of $\omega_{1}$ , $\omega_{2}$ and $\omega$ and using $\boldsymbol{E}\left(\omega\right)={\rm i}\omega\boldsymbol{A}\left(\omega\right)$ . The Dirac delta in equation (8) simply means that $\omega$ is to be replaced by the sum of external frequencies entering the diagram. Thus, the nth order conductivity may be found using the following rules:

  • 1.  
    Draw all the diagrams with n wavy lines coming into the diagram, one going out and one dashed interconnecting line. Integrate over the internal frequencies and ignore the conservation of momentum in the vertex containing $\omega$ , as that is already taken into account by the Dirac delta in the definition of the conductivity.
  • 2.  
    Reading clockwise starting from the vertex containing $\omega$ , insert, by order, a generalized velocity operator $h^{\alpha_{1}\cdots\alpha_{k}}$ at each vertex and a Green's function at each edge. Each $\alpha_{i}$ is the label of a frequency line connecting to the vertex. If the edge is a dashed line, the Green's function is ig<. All the edges before that correspond to igR and the ones after it to igA. Trace over the resulting operator.
  • 3.  
    Multiply by $\Omega^{-1}{\rm e}^{n+1}\prod\nolimits_{k=1}^{n}\left({\rm i}\omega_{k}\right){}^{-1}\left({\rm i}\hbar\right){}^{1-N}$ , where n is the number of incoming wavy lines and N is the number of interconnecting lines. For each vertex, divide by the factorial of the number of outgoing lines.

Following these rules and replacing ig< by equation (12), the first-order conductivity is found:

Equation (26)

Similarly, for the second-order conductivity:

Equation (27)

The procedure is exactly the same for the nth order conductivity, which will have $2^{n-1}(n+2)$ diagrams. The higher-order expansions will not be obtained because a realistic computation of physical quantities with those formulas using spectral methods would require tremendous computational power. This point will be further explained in the next section. Despite these limitations, we have provided the required framework for obtaining those formulas, which might be useful in the future.
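The diagram count can be checked with a one-liner; it also makes explicit why third order, with its 20 diagrams and higher-dimensional objects, quickly becomes prohibitive:

```python
def diagram_count(n):
    """Number of diagrams contributing to the nth-order conductivity,
    2^(n-1) * (n + 2), as quoted in the text."""
    return 2 ** (n - 1) * (n + 2)

# first and second order, plus the growth that makes higher orders impractical
print([diagram_count(n) for n in (1, 2, 3, 4)])  # [3, 8, 20, 48]
```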

4. Spectral methods

From the previous section, it becomes clear that the only objects needed to calculate the conductivity up to any order are the retarded and advanced Green's functions, Dirac deltas and the generalized velocity operators. The meaning of these functions is obvious in the energy eigenbasis of H0, but not in the position basis that we ultimately want to use. Therefore, these objects are expanded in a truncated series of Chebyshev polynomials. This choice of polynomials allows for a very efficient and numerically stable method of computing these functions [17]. Because the Dirac deltas and Green's functions have singularities, their truncated expansions are plagued by Gibbs oscillations. Methods such as the kernel polynomial method (KPM) add a weight function [18, 19] to each term in the expansion, effectively damping the oscillations. Take the Green's function as an example: using a kernel, as more polynomials are added, the expansion approaches the exact Green's function, and so becomes more singular. Although the behavior approaches that of a Green's function as more polynomials are added, we cannot speak of convergence in the usual sense. In order to assess the convergence properties of our method, we will not use a kernel, but instead use a finite imaginary broadening parameter inside the Green's function, that is ${\rm i}0^{+}\rightarrow {\rm i}\lambda$ . For a finite $\lambda$ , the function is no longer singular, so we can expect the expansion to converge within a given accuracy once enough polynomials have been added. In this paper, we use an exact decomposition of the Green's function [20] in terms of Chebyshev polynomials in order to evaluate the convergence of our method. The term $\hbar/\lambda$ may also be interpreted as a phenomenological relaxation time due to inelastic scattering processes and may be adjusted to reflect this fact.
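The difference between kernel damping and the Gibbs oscillations it suppresses can be visualized by expanding a Dirac delta directly. This sketch uses the standard KPM expansion with the Jackson kernel [18, 19] purely for illustration; recall that the paper itself replaces the kernel by a finite broadening $\lambda$.

```python
import numpy as np

# Sketch: Chebyshev expansion of delta(x - x0), truncated at M moments,
# with and without the Jackson kernel that damps Gibbs oscillations.
def jackson(M):
    m = np.arange(M)
    q = np.pi / (M + 1)
    return ((M - m + 1) * np.cos(q * m) + np.sin(q * m) / np.tan(q)) / (M + 1)

def delta_expansion(x, x0, M, kernel=True):
    g = jackson(M) if kernel else np.ones(M)
    acc = g[0] * np.ones_like(x)
    for m in range(1, M):
        # T_m(x) = cos(m * arccos(x)); moments of the delta are T_m(x0)
        acc += 2 * g[m] * np.cos(m * np.arccos(x0)) * np.cos(m * np.arccos(x))
    return acc / (np.pi * np.sqrt(1 - x ** 2))

x = np.linspace(-0.95, 0.95, 2001)
raw = delta_expansion(x, 0.2, 64, kernel=False)
kpm = delta_expansion(x, 0.2, 64, kernel=True)
# the kernel removes the negative Gibbs overshoot away from the peak
print(raw.min() < -0.1, kpm.min() > -1e-6)
```

The bare truncated series oscillates and dips well below zero, while the Jackson-damped series stays non-negative, at the cost of a broadened peak; the finite-$\lambda$ strategy of the text trades this kernel broadening for a controllable Lorentzian one.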

In this section, we introduce the Chebyshev polynomials and use them to expand the Green's functions and Dirac deltas of the previous formulas. This will cast those formulas into a more useful form.

4.1. Expansion in Chebyshev polynomials

The Chebyshev polynomials of the first kind are a set of orthogonal polynomials defined in the range $\left[-1,1\right]$ by

Equation (28)

$$T_{n}\left(\cos\theta\right)=\cos\left(n\theta\right)$$

They satisfy a recursion relation

Equation (29)

$$T_{0}\left(x\right)=1$$

Equation (30)

$$T_{1}\left(x\right)=x$$

Equation (31)

$$T_{n+1}\left(x\right)=2x\,T_{n}\left(x\right)-T_{n-1}\left(x\right)$$

and the following orthogonality relation

Equation (32)

$$\int_{-1}^{1}\frac{T_{n}\left(x\right)T_{m}\left(x\right)}{\pi\sqrt{1-x^{2}}}\,{\rm d}x=\frac{1+\delta_{n0}}{2}\,\delta_{nm}$$
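The recursion and the trigonometric definition are easy to cross-check numerically; a minimal sketch (the function name is illustrative):

```python
import numpy as np

# Check that the three-term recursion reproduces T_n(x) = cos(n * arccos(x)).
def cheb_recursive(n, x):
    t_prev, t = np.ones_like(x), x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2 * x * t - t_prev
    return t

x = np.linspace(-0.99, 0.99, 101)
for n in range(8):
    assert np.allclose(cheb_recursive(n, x), np.cos(n * np.arccos(x)))
```

The recursion is what makes spectral methods cheap in practice: applied to vectors, it needs only one Hamiltonian-vector product per added moment.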

These polynomials may be used to expand functions of the Hamiltonian provided its spectrum has been scaled by a factor $\Delta_{E}$ to fit in the range $\left[-1,1\right]$ . The only functions of the Hamiltonian appearing in the expansion are Dirac deltas and Green's functions, which have the following expansions in terms of Chebyshev polynomials:

Equation (33)

Equation (34)

where

Equation (35)

and

Equation (36)

The function $g^{\sigma,\lambda}$ encompasses both retarded and advanced Green's functions in the limit $\lambda\rightarrow0^{+}$ : $g^{+,0^{+}}$ is the retarded Green's function and $g^{-,0^{+}}$ the advanced one. The operator part has been completely separated from its other arguments. All the Dirac deltas and Green's functions may therefore be separated into a polynomial of H0 and a coefficient which encapsulates the frequency and energy parameters. The trace in the conductivity now becomes a trace over a product of polynomials and $\hat{h}$ operators, which can be encapsulated in a new object, the $\Gamma$ matrix:

Equation (37)

The upper indices in bold stand for any number of indices: $\boldsymbol{\alpha}_{1}=\alpha_{1}^{1}\alpha_{1}^{2}\cdots\alpha_{1}^{N_{1}}$ . Here we have used $\tilde{h}^{\boldsymbol{\alpha}_{1}}=\left({\rm i}\hbar\right){}^{N_{1}}\hat{h}^{\boldsymbol{\alpha}_{1}}$ rather than $\hat{h}$ to avoid using complex numbers when the Hamiltonian matrix is purely real in our numerical simulations. It is important to keep in mind that these new operators are no longer Hermitian. The commas in $\Gamma$ separate the various $\tilde{h}$ operators. N is the number of unit cells in the sample being studied and ensures that $\Gamma$ is an intensive quantity. Some examples:

Equation (38)

Equation (39)

Equation (40)

The $\Gamma$ matrix only depends on the physical system itself as it is merely a function of the Hamiltonian and the $\tilde{h}$ operators. The coefficients of the Chebyshev expansion may similarly be aggregated into a matrix, which we denote by $\Lambda$ . Some examples:

Equation (41)

Equation (42)

and

Equation (43)

In terms of these new objects, the conductivities become

Equation (44)

in first order and

Equation (45)

in second order. $\Omega_{c}$ is the volume of the unit cell.
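The structure of a two-dimensional $\Gamma$-like object is easy to sketch for a small dense Hamiltonian. The operator ordering and prefactors below are illustrative rather than the paper's exact definition, and a production code would never build $T_m(H)$ as dense matrices, using the matrix-free Chebyshev recursion on vectors instead.

```python
import numpy as np

# Sketch of a two-dimensional Gamma-like object
# Gamma_{mn} ~ Tr[ h T_m(H) h T_n(H) ] / N for a small dense Hamiltonian,
# built with the Chebyshev recursion T_{m+1} = 2 H T_m - T_{m-1}.
def cheb_stack(H, M):
    """Return the list [T_0(H), ..., T_{M-1}(H)] as dense matrices."""
    N = H.shape[0]
    Ts = [np.eye(N), H.copy()]
    for _ in range(2, M):
        Ts.append(2 * H @ Ts[-1] - Ts[-2])
    return Ts[:M]

rng = np.random.default_rng(0)
N, M = 8, 4
H = rng.standard_normal((N, N)); H = (H + H.T) / 2
H /= np.abs(np.linalg.eigvalsh(H)).max() * 1.01   # rescale spectrum into [-1, 1]
v = rng.standard_normal((N, N)); v = (v + v.T) / 2  # stand-in for an h operator
Ts = cheb_stack(H, M)
gamma = np.array([[np.trace(v @ Ts[m] @ v @ Ts[n]).real / N for n in range(M)]
                  for m in range(M)])
# symmetric under m <-> n by cyclicity of the trace, for this ordering
assert np.allclose(gamma, gamma.T)
```

The key point survives the simplification: $\Gamma$ depends only on the Hamiltonian and the $\tilde{h}$ operators, so it is computed once and then contracted with different $\Lambda$ coefficient matrices for different frequencies, temperatures or chemical potentials.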

4.2. Considerations on the numerical storage of $ {\Gamma} $

Naturally, one cannot sum the entire Chebyshev series, so it has to be truncated at a certain number of polynomials $N_{{\rm max}}$ . Each entry in a $\Gamma$ matrix is a complex number, represented numerically by two double-precision floating-point numbers of 8 bytes each. The storage needed for a $\Gamma$ matrix of dimension n is therefore $16N_{{\rm max}}^{n}$ bytes. The number of Chebyshev polynomials needed to obtain a decent resolution depends heavily on the problem at hand, but a typical number is $N_{{\rm max}}=1024$ . A one-dimensional $\Gamma$ matrix would then take up 16 KiB of storage, a two-dimensional matrix 16 MiB and a three-dimensional matrix 16 GiB. Three-dimensional matrices appear in the second-order conductivity. The third-order conductivity would require a four-dimensional matrix and, as such, 16 TiB of storage. Numbers like these make it unrealistic to go beyond the second-order conductivity.
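The storage estimate is a one-line formula; a sketch:

```python
def gamma_storage_bytes(n, n_max=1024, bytes_per_entry=16):
    """Storage for an n-dimensional Gamma matrix: each of the N_max^n
    complex entries takes two 8-byte doubles."""
    return bytes_per_entry * n_max ** n

for n, unit, scale in [(1, "KiB", 2**10), (2, "MiB", 2**20),
                       (3, "GiB", 2**30), (4, "TiB", 2**40)]:
    print(n, gamma_storage_bytes(n) / scale, unit)  # 16.0 in every case
```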

5. Numerical results

In this section we showcase several examples, of increasing complexity, to compare our formalism with the literature. Starting with graphene, we compute the linear optical conductivity and verify that it agrees perfectly with the $\boldsymbol{k}$ -space formalism. Breaking the sublattice symmetry with gapped graphene, we are able to obtain the second-order conductivity and check that it too agrees perfectly. This proves that our method is able to accurately reproduce the existing results. Then, we show two examples that cannot be reproduced easily with the $\boldsymbol{k}$ -space formalism: second harmonic generation in gapped graphene with Anderson disorder and vacancies of varying concentration. Finally, the convergence properties are evaluated and the efficiency of the method is discussed.

5.1. Linear optical response in graphene

Let a be the distance between consecutive atoms in the honeycomb lattice. Then, the primitive vectors between unit cells are (see figure 6)

and the distance vectors between nearest neighbors are

Figure 6. Honeycomb lattice and choice of primitive vectors.

The area of the unit cell is $\Omega_{c}=\frac{3\sqrt{3}}{2}a^{2}$ . Starting from equation (19), the graphene Hamiltonian is obtained by invoking translational invariance of the unit cell $t_{\mu\nu}\left(\boldsymbol{R}_{m},\boldsymbol{R}_{n}\right)\,=$ $t_{\mu\nu}\left(\boldsymbol{R}_{m}-\boldsymbol{R}_{n}\right)$ and

The remaining non-zero hopping integrals are found by using $t_{AB}=t_{BA}$ . The on-site energies $t_{AA}\left(\boldsymbol{0}\right)$ and $t_{BB}\left(\boldsymbol{0}\right)$ are taken to be zero. A factor of two is included due to spin degeneracy.
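For reference, a minimal numerical check of the geometry. The vectors below are one common convention (figure 6 fixes the paper's own choice), but the unit-cell area and the bond lengths quoted in the text are convention-independent.

```python
import numpy as np

# One common honeycomb convention: a is the nearest-neighbour distance.
a = 1.0
a1 = a * np.array([3 / 2,  np.sqrt(3) / 2])   # primitive vectors (assumed choice)
a2 = a * np.array([3 / 2, -np.sqrt(3) / 2])
deltas = a * np.array([[1, 0],                 # nearest-neighbour bond vectors
                       [-1 / 2,  np.sqrt(3) / 2],
                       [-1 / 2, -np.sqrt(3) / 2]])

# unit-cell area |a1 x a2| = (3*sqrt(3)/2) a^2, as quoted in the text
area = abs(a1[0] * a2[1] - a1[1] * a2[0])
assert np.isclose(area, 3 * np.sqrt(3) / 2 * a ** 2)
assert np.allclose(np.linalg.norm(deltas, axis=1), a)  # all bonds have length a
```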

These parameters were used to obtain the first-order optical conductivity for graphene, as seen in figure 7. We used a lattice with 4096 unit cells in each direction and 2048 Chebyshev moments in the expansion. The resulting plot is compared to the results obtained in [8] through $\boldsymbol{k}$ -space integration of a translation-invariant system. The curves are indistinguishable.

Figure 7. First-order longitudinal yy conductivity for graphene in units of $\sigma_{0}={\rm e}^{2}/\hbar$ . Hopping parameter: $t=2.33~{\rm eV}$ , temperature: $T=2.33~{\rm mK}$ , chemical potential: $\mu=0.466~{\rm eV}$ , broadening parameter: $\lambda=38.8~{\rm meV}$ , number of Chebyshev moments used: M  =  2048, lattice size: $L=4096\times4096$ . The solid curves represent the optical conductivity obtained by KITE (real part in green, imaginary in blue). The superimposed dashed lines are obtained in [8].

5.2. Gapped graphene

The only difference relative to regular graphene is found in the on-site energies. Let $t_{AA}\left(\boldsymbol{0}\right)=\Delta/2$ and $t_{BB}\left(\boldsymbol{0}\right)=-\Delta/2$ . With the opening of a sizeable gap, the effects of excitons become relevant. In this simplified model, we ignore these effects to focus on the intrinsic second-order response of the material. In these conditions, the one- and three-dimensional $\Gamma$ matrices are identically zero2, so the second-order conductivity may be calculated resorting only to two-dimensional $\Gamma$ matrices. The calculation is thus simplified tremendously, because the second-order conductivity reduces to

The indices $n,m$ are understood to be summed over. The photogalvanic effect may be reproduced from this formula by setting $\omega_{1}=\omega=-\omega_{2}$ and the numerical results are shown in figure 8. Again, we used 4096 unit cells in each lattice direction and 2048 Chebyshev moments and compare the results with the ones obtained by integrating in $\boldsymbol{k}$ -space, just like in the previous subsection. For convenience, we define the constant $\sigma_{2}={\rm e}^{3}a/4t\hbar$ [21] in terms of the hopping integral t and the lattice parameter a.

Figure 8. Second-order yyy photogalvanic effect for gapped graphene. Hopping parameter: $t=2.33~{\rm eV}$ , temperature: $T=0~{\rm K}$ , chemical potential: $\mu=0~{\rm eV}$ , gap: $\Delta=7.80~{\rm eV}$ , broadening parameter: $\lambda=39~{\rm meV}$ , number of Chebyshev moments used: M  =  2048, lattice size: $L=4096\times4096$ . The imaginary part disappears after the result is properly symmetrized.

This particular example benefits considerably from the cancellation of the most complicated objects that needed to be calculated. In appendix A.1 we present an example with less symmetry that confirms the complete agreement between our method and the $\boldsymbol{k}$ -space formalism.

5.3. Photogalvanic effect in gapped graphene with Anderson disorder

Our formalism does not rely on translation invariance, and so it may be used to study disordered systems. To show this, we introduce a simple model of disorder into gapped graphene by letting each atomic site have a random local energy drawn from the uniform distribution on $\left[-W/2,W/2\right]$ (Anderson disorder [22]):

where $\boldsymbol{R}$ is the position of the unit cell and $\sigma$ labels the atoms inside each unit cell. The presence of disorder is expected to smooth out the sharp features of the optical response. As disorder increases, we should see a decrease in conductivity due to Anderson localization. This is the exact behavior that is seen in figure 9 where we plot the photogalvanic effect in gapped graphene in the presence of Anderson disorder of varying strength. Some fluctuations exist at the features, which are expected to disappear as the system size approaches the thermodynamic limit.
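Generating this disorder amounts to a one-line modification of the Hamiltonian diagonal; a dense-matrix sketch with illustrative names:

```python
import numpy as np

# Sketch: add Anderson disorder of strength W to a tight-binding Hamiltonian,
# i.e. uniform on-site energies in [-W/2, W/2] on the diagonal.
def add_anderson_disorder(H, W, rng):
    Hd = H.copy()
    Hd[np.diag_indices_from(Hd)] += rng.uniform(-W / 2, W / 2, size=H.shape[0])
    return Hd

rng = np.random.default_rng(42)
H0 = np.diag(np.ones(9), 1) + np.diag(np.ones(9), -1)  # 10-site chain, hopping t = 1
H = add_anderson_disorder(H0, W=2.0, rng=rng)
assert np.allclose(H, H.T)                    # still hermitian
assert np.all(np.abs(np.diag(H)) <= 1.0)      # on-site energies bounded by W/2
```

In a spectral-method code the disorder enters only through the Hamiltonian, so the same $\Gamma$-matrix machinery applies unchanged; one simply averages over disorder realizations.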

Figure 9. Photogalvanic effect for gapped graphene in the presence of Anderson disorder of varying strength W and a broadening parameter of $\lambda=23~{\rm meV}$ . The parameters are the same as for figure 8 except for the number of polynomials, which is M  =  512. The dashed lines represent the imaginary part of the conductivity.

As expected, the introduction of Anderson disorder produces a broadening of the sharp features of the nonlinear optical conductivity. This broadening also means that there will be a larger response to the external electric field at frequencies smaller than the gap.

The large oscillations near the origin reveal something interesting about the numerical details of our formalism. Equation (27) comprises a complicated sum of several terms. Individually, some of these terms may be very large, but there may be cancellations among them. For each of these terms, the Chebyshev expansion is exact in the limit of infinitely many polynomials. For a finite number of polynomials, there will be slight differences between the exact result and the expansion, and if the exact result is very large, this difference will be considerable. It is highly unlikely that this difference is the same for each term, and so their sum may not cancel out in the end. This behavior is typical of the low-frequency regime and is related to the singularities that plagued the velocity-gauge approach. These singularities have been discussed since the early work of Sipe and for a long time challenged the equivalence between the velocity and length gauges [4, 8, 10]. This effect could be fully mitigated by greatly increasing the number of polynomials, but here we are interested in the finite-frequency behavior.

5.4. Second-harmonic generation of gapped graphene with vacancies

In realistic samples, vacancies and impurities may exist due to imperfections in the fabrication process, as well as other more complex structural defects. In this section, we show that our method allows us to obtain the second-harmonic generation of a system with structural disorder. Using equation (27), we show in figure 10 the effect of vacancies of varying concentration on the SHG of gapped graphene. Unlike Anderson disorder, the addition of vacancies does not change the gap; their most noticeable effect is to flatten the features of the second-harmonic generation. As discussed in the previous section, the lower frequencies are dominated by oscillations and would require many more polynomials to fully converge. We therefore omit that region and represent only the remaining regions, which have already converged within the desired accuracy.
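A minimal sketch of how vacancies can be introduced in practice: delete a random fraction of sites (rows and columns) from the tight-binding Hamiltonian. The square lattice, hopping value and concentration below are illustrative stand-ins for the paper's gapped-graphene setup.

```python
import numpy as np

# Toy square lattice with periodic boundaries; vacancies are created by
# deleting a random fraction of sites from the Hamiltonian matrix.
rng = np.random.default_rng(0)
L, t, conc = 20, 2.33, 0.05        # linear size, hopping (eV), vacancy conc.

def idx(i, j):
    return (i % L) * L + (j % L)

H = np.zeros((L * L, L * L))
for i in range(L):
    for j in range(L):
        for di, dj in ((1, 0), (0, 1)):      # nearest-neighbour bonds
            a, b = idx(i, j), idx(i + di, j + dj)
            H[a, b] = H[b, a] = -t

keep = rng.random(L * L) >= conc             # drop ~conc of the sites
H_vac = H[np.ix_(keep, keep)]                # vacancy-disordered Hamiltonian
```

Because vacancies only remove rows and columns, the surviving hoppings (and hence the gap of the clean model) are untouched, consistent with the observation above that vacancies do not change the gap.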


Figure 10. Second-harmonic generation in gapped graphene for a varying concentration of vacancies and $\lambda=2.3~{\rm meV}$ . The blue (red) curves represent the real (imaginary) part of the conductivity. The darker curves have a higher concentration of vacancies. System size: L  =  2048, number of polynomials: M  =  512. All the other parameters are the same as in figure 9.


5.5. Considerations on convergence and accuracy

In this section, we briefly discuss some convergence properties of our method. For a more thorough discussion, see [17]. The convergence to the exact value depends on several factors:

  • 1.  
    Spectral methods rely on the self-averaging properties of random vectors, yielding an associated variance. The error bar decreases as $1/\sqrt{N_{R}N}$ , where NR is the number of random vectors and N is the size of the sample.
  • 2.  
    In the thermodynamic limit of an infinite lattice, the spectrum becomes continuous and so we expect the conductivity curve to be smooth. However, the systems used in simulations are finite and so have a typical energy level spacing, which we denote by $\delta\varepsilon$ . This has important consequences for the resolution. Details characterized by a smaller energy scale than that of $\delta\varepsilon$ are meaningless because they cannot be distinguished from the contribution of individual energy levels. The maximum resolution is therefore limited by the energy level spacing. For our concrete examples with the honeycomb lattice, we use $\delta\varepsilon=3\pi t/L$ , the energy level spacing at the Dirac point in graphene for a system of linear dimension L.
  • 3.  
    The resolution may be controlled through $\lambda$ , the broadening parameter of the Green's functions. Energy differences smaller than $\lambda$ become indistinguishable from one another. On the one hand, a small $\lambda$ is required in order to resolve the sharp features of the curve accurately. On the other hand, when $\lambda\lesssim \delta\varepsilon$ , the discrete nature of the spectrum starts to become visible through the roughness of the curve. For sufficiently small $\lambda$ , the expected sharp features of the curve become indistinguishable from the contributions of the individual energy levels. If these issues are not solved, they become a major source of systematic error in the final results. Therefore, if we want to see the expected thermodynamic limit, we have to ensure $\lambda\gtrsim\delta\varepsilon$ .
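The $1/\sqrt{N_{R}N}$ behavior of point 1 can be checked directly with a stochastic (Hutchinson-type) trace estimate. The dense random matrix below merely stands in for the sparse operators of the main text, and the sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500
A = rng.standard_normal((N, N))
A = (A + A.T) / 2                      # toy Hermitian operator

exact = np.trace(A)

def stochastic_trace(NR):
    """Average of xi^T A xi over NR random +/-1 (Rademacher) vectors."""
    est = 0.0
    for _ in range(NR):
        xi = rng.choice([-1.0, 1.0], size=N)
        est += xi @ A @ xi
    return est / NR

def rms_error(NR, trials=20):
    """Root-mean-square error of the estimate over independent repetitions."""
    return np.sqrt(np.mean([(stochastic_trace(NR) - exact) ** 2
                            for _ in range(trials)]))

e1, e100 = rms_error(1), rms_error(100)   # error ratio near sqrt(100) = 10
```

Going from $N_{R}=1$ to $N_{R}=100$ shrinks the error by roughly $\sqrt{100}=10$; the same square-root gain applies to the system size $N$, which is why single random vectors suffice for very large lattices.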

In figure 11, the yy optical conductivity of graphene is represented for several values of $\lambda$ . In this example, $\delta\varepsilon=5.3~{\rm meV}$ . As $\lambda$ is decreased, the curve becomes sharper, but when $\lambda=2.3~{\rm meV}$ the discreteness of the spectrum becomes noticeable through the roughness of the curve, which starts to deviate from the expected smooth curve of the thermodynamic limit.


Figure 11. First-order optical yy conductivity per spin of graphene for $M=16\,384$ and L  =  2048, now as a function of the broadening parameter $\lambda$ . The remaining parameters are the same as for figure 7. The solid (dashed) curves represent the real (imaginary) part of the conductivity. The legend shows the values of the broadening parameter. The lower inset shows the evolution of the value of the conductivity for each $\lambda$ as the number of polynomials is increased for $\hbar\omega=4.66~{\rm eV}$ . The upper inset shows the same thing but for several very close frequencies around $\hbar\omega=2.33~{\rm eV}$ . The darker curves correspond to higher frequencies.


In the lower inset, we study the convergence as a function of the number of polynomials at $\hbar\omega=4.66~{\rm eV}$ , a region of rapidly changing conductivity. The smaller the $\lambda$ , the more polynomials are required in order to have a fully converged result. Within the accuracy $\delta\sigma/\sigma_{0}\simeq0.1$ , all the curves have already converged at $1.6\times10^{4}$ polynomials. These calculations were repeated for several different initial random vectors. In the plot we show only one of these calculations. The error bar associated with the random vectors is too small to be distinguished from the curves themselves.

In the upper inset, we repeat the analysis in a very small region around $\hbar\omega=2.33~{\rm eV}$ , a region of slowly increasing conductivity. The plot shows three sets of curves with different colors. Within each set, we represent a collection of frequencies ranging from $\hbar\omega=2.3300~{\rm eV}$ to $\hbar\omega=2.3316~{\rm eV}$ ; the darker curves correspond to higher frequencies. On the scale of the main graph, all these curves appear to have converged to the same value. The inset, however, shows a different picture. The red ($\lambda=23~{\rm meV}$ ) and black ($\lambda=230~{\rm meV}$ ) sets of curves show a variation consistent with the expected increase of the conductivity: zooming in on those sets, one can check that they are indeed increasing in value as $\omega$ increases. The green set ($\lambda=2.3~{\rm meV}$ ) is not only changing on a scale much larger than expected, but is also decreasing. This variation comes from the individual contributions of the energy levels, not from features of the conductivity, and is therefore artificial. Within the accuracy $\delta\sigma/\sigma_{0}\simeq10^{-3}$ , each of these curves has completely converged at $1.6\times10^{4}$ polynomials, but this level of accuracy is meaningless for $\lambda=2.3~{\rm meV}$ . The error bars are omitted for clarity; their values are $\delta\sigma/\sigma_{0}=10^{-3}$ at $\lambda=230~{\rm meV}$ , $3\times10^{-3}$ at $\lambda=23~{\rm meV}$ , and $5\times10^{-3}$ at $\lambda=2.3~{\rm meV}$ . At this scale, the error bars are comparable to the variation due to the number of polynomials and to the value of $\lambda$ .

These frequencies were chosen to compare the conductivity in a place where it is expected to converge quickly and another where it is expected to converge slowly. Looking at these graphs, it is possible to estimate how many polynomials are required to converge to the final value of the conductivity for the specified parameter $\lambda$ within a given accuracy. A rough estimate of the scaling is given by $N\sim\lambda^{-1}$ .
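The $N\sim\lambda^{-1}$ estimate can be made concrete for a single broadened Green's function: the Chebyshev coefficients of $1/(E-x+{\rm i}\lambda)$ on the rescaled band $[-1,1]$ decay geometrically at a rate set by $\lambda$, so halving $\lambda$ roughly doubles the number of moments needed. The energy $E$, the tolerance and the quadrature sizes below are illustrative choices, not the paper's parameters.

```python
import numpy as np

def moments_needed(lam, E=0.3, tol=1e-6, Mmax=1200, K=4800):
    """First Chebyshev order n at which |c_n| of the broadened Green's
    function g(x) = 1/(E - x + i*lam) falls below tol (relative to the
    largest coefficient), with c_n from Chebyshev-Gauss quadrature."""
    theta = np.pi * (np.arange(K) + 0.5) / K
    g = 1.0 / (E - np.cos(theta) + 1j * lam)
    n = np.arange(Mmax)
    c = np.cos(np.outer(n, theta)) @ g * (2.0 / K)   # c_n, n = 0..Mmax-1
    c[0] /= 2.0
    mags = np.abs(c)
    return int(np.argmax(mags < tol * mags.max()))

n1 = moments_needed(0.05)     # moments for broadening 0.05
n2 = moments_needed(0.025)    # roughly twice as many for half the broadening
```

For a pole at $E+{\rm i}\lambda$ just off the band, $|c_n|$ decays like ${\rm e}^{-n\lambda/\sqrt{1-E^{2}}}$, which is the origin of the $N\sim\lambda^{-1}$ scaling quoted above.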

In sum, given a fixed resolution $\lambda$ , the number of polynomials should be large enough to ensure that the curves have converged, and the system size L should be large enough to ensure that the discreteness of the spectrum cannot be seen.

A similar analysis may be done for the second-order conductivity. We do not present it here for two reasons. Firstly, the main points of the previous paragraphs remain the same. Secondly, such an analysis would be impractical, as its computational cost is tremendously higher.

5.6. Considerations on efficiency

Our formalism provides a very general framework with which to compute the nonlinear optical response up to any order. Once the formulas are obtained, we choose to use spectral methods to perform the computation. This is not always the most efficient approach: for systems with translation invariance and periodic boundary conditions, we can specialize the formulas to $\boldsymbol{k}$ -space and then perform the explicit integration. Then, for a given set of parameters (temperature, broadening, Fermi energy), the computation time scales as $L^{D}N_{\omega}$ , where $L^{D}$ is the number of points in the Brillouin zone (which is also the number of lattice sites), D is the dimensionality and $N_{\omega}$ is the number of frequencies we want to compute. For each $\boldsymbol{k}$ point, this method comes down to diagonalizing the Bloch Hamiltonian $H_{\boldsymbol{k}}$ , and then summing over the whole set of $\boldsymbol{k}$ points for each frequency. This method is extremely efficient at computing the optical conductivity at any order using the velocity gauge.
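The $\boldsymbol{k}$ -space workflow can be sketched as follows for linear response. The two-band Hamiltonian, its $x$ velocity and the zero-temperature Lorentzian-broadened interband absorption below are schematic stand-ins (not the paper's equation (27) or its gapped-graphene parameters), meant only to show the $L^{D}N_{\omega}$ loop structure.

```python
import numpy as np

# Schematic k-space evaluation: diagonalize a toy two-band Bloch
# Hamiltonian on an Lk x Lk grid and accumulate a broadened interband
# absorption for every frequency. All parameters are illustrative.
t, Delta, lam = 2.33, 0.1, 0.02
Lk = 32                                   # Lk x Lk points in the BZ

def hk(kx, ky):
    f = t * (1 + np.exp(1j * kx) + np.exp(1j * ky))   # toy structure factor
    return np.array([[Delta, f], [np.conj(f), -Delta]])

def vk(kx, ky):
    df = 1j * t * np.exp(1j * kx)                     # dH/dk_x of the toy f
    return np.array([[0, df], [np.conj(df), 0]])

w = np.linspace(0.0, 8.0, 200)            # N_w frequencies
sigma = np.zeros_like(w)
ks = 2 * np.pi * np.arange(Lk) / Lk
for kx in ks:                             # L^D loop: one eigh per k point
    for ky in ks:
        e, U = np.linalg.eigh(hk(kx, ky))
        v = U.conj().T @ vk(kx, ky) @ U   # velocity in the band basis
        # zero-T interband absorption, valence (0) -> conduction (1)
        sigma += np.abs(v[1, 0])**2 * lam / ((w - (e[1] - e[0]))**2 + lam**2)
sigma /= Lk**2                            # BZ average (arbitrary units)
```

The diagonalization is done once per $\boldsymbol{k}$ point and reused for every frequency, which is what makes the overall cost $L^{D}N_{\omega}$ rather than $L^{D}$ times a diagonalization per frequency.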

Using spectral methods, the computation is split into the calculation of the Chebyshev moments and the final matrix product of the $\Gamma$ matrices with the $\Lambda$ matrices. The first part is the most demanding and is independent of the parameters mentioned above. Its computation time scales as $L^{D}N^{n+1}$ , where n is the order of the conductivity and N is the number of Chebyshev polynomials. More concretely, if we want to calculate the conductivity for a certain $\lambda$ , using $N\sim\lambda^{-1}\sim N_{\omega}$ , we find that the $\boldsymbol{k}$ -space calculation scales much more favorably.
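The dominant cost, the Chebyshev-moment recursion, can be sketched on a toy model. The open 1D chain, the rescaling factor and the random-phase vector below are illustrative; the actual implementation applies the same recursion to the sparse 2D Hamiltonians of the main text.

```python
import numpy as np

# Chebyshev moment recursion mu_n = <r|T_n(H)|r> for a toy 1D chain.
# H must first be rescaled so that its spectrum fits inside [-1, 1].
N, M = 1000, 256
rng = np.random.default_rng(0)

def apply_h(psi):
    """Rescaled H|psi> for an open nearest-neighbour chain."""
    out = np.zeros_like(psi)
    out[1:] += psi[:-1]
    out[:-1] += psi[1:]
    return out / 2.05          # chain spectrum lies in (-2, 2)

r = np.exp(2j * np.pi * rng.random(N)) / np.sqrt(N)   # random-phase vector
t0, t1 = r.copy(), apply_h(r)
mu = [np.vdot(r, t0).real, np.vdot(r, t1).real]
for _ in range(M - 2):
    t0, t1 = t1, 2 * apply_h(t1) - t0    # T_{n+1} = 2 H T_n - T_{n-1}
    mu.append(np.vdot(r, t1).real)
```

Each matrix-free application of H costs $O(L^{D})$ for a sparse Hamiltonian, and the n-th order conductivity nests n + 1 such recursions, which is the origin of the $L^{D}N^{n+1}$ scaling quoted above.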

If the system has no translation invariance, $\boldsymbol{k}$ -space integration is no longer useful and we would need to diagonalize the full Hamiltonian numerically. This approach scales as $L^{3D}N_{\omega}$ , which is highly unfavorable, so we would be limited to very small systems. In this context, spectral methods become the preferred choice.

For the examples used in this paper, the computation of the second-order conductivity with the $\boldsymbol{k}$ -space formalism in a system with L  =  2048 took around 2 min on a Xeon E5-2650 with 16 threads. In comparison, the same computation took 3 h for translation-invariant gapped graphene with 2048 polynomials, and 70 h for gapped graphene with Anderson disorder/vacancies and 512 polynomials. Despite the discrepancy in computational efficiency, we know of no more efficient way to compute the nonlinear optical conductivity of disordered systems.

6. Conclusions

We developed an out-of-equilibrium expansion of the non-linear optical response of non-interacting systems using the Keldysh formalism and expressed everything in terms of traces of operators. This provides basis-independent expressions for the linear and non-linear optical conductivities, which removes the requirement of translation invariance and allows us to include magnetic fields and disorder in our tight-binding calculations. We also provide a diagrammatic representation of this expansion, which makes obtaining those expressions a very straightforward process.

The expressions for the non-linear conductivities are evaluated numerically by resorting to an expansion in Chebyshev polynomials and a stochastic evaluation of the trace, in close resemblance to the Kernel Polynomial Method (KPM). We provide the mapping that converts the aforementioned expressions into objects suited for numerical evaluation with spectral methods. This is only possible because of the careful way in which these expressions were constructed in the first place.

We built an open-source software package that is able to calculate the first- and second-order optical conductivities of very large 2D tight-binding systems ($10^{10}$ atoms) with disorder and magnetic fields. This software was used to obtain the first- and second-order conductivities of graphene and gapped graphene. These same quantities were calculated independently with the usual integration in $\boldsymbol{k}$ -space, and the results are in great agreement, validating our method. Finally, we showed two examples with disorder that cannot be treated under the usual $\boldsymbol{k}$ -space formalism: gapped graphene with Anderson disorder and with vacancies.

We briefly discussed the convergence properties of this method by analyzing how the curves change as the resolution is increased. The resolution is controlled by $\lambda$ , the broadening parameter of the Green's functions, and is limited by $\delta\varepsilon$ , the energy level spacing of the system. The expected thermodynamic limit is obtained by decreasing $\delta\varepsilon$ (increasing the system size) while ensuring $\delta\varepsilon\lesssim \lambda$ .

For systems with translation invariance, the $\boldsymbol{k}$ -space integration is very quick and is preferred over our method. When the system lacks this property, our method becomes the most efficient way to calculate the second-order conductivity with disorder. This paper serves as a proof of concept; the effects of realistic disorder on the nonlinear optical properties of 2D materials will be explored in a future paper.

Acknowledgments

The work of SMJ is supported by Fundação para a Ciência e Tecnologia (FCT) under the Grant PD/BD/142798/2018. The authors acknowledge financing of Fundação da Ciência e Tecnologia, of COMPETE 2020 program in FEDER component (European Union), through projects POCI-01-0145-FEDER-028887 and UID/FIS/04650/2013. The authors also acknowledge financial support from Fundação para a Ciência e Tecnologia, Portugal, through national funds, co-financed by COMPETE-FEDER (Grant M-ERA-NET2/0002/2016—UltraGraf) under the Partnership Agreement PT2020. We would also like to thank JMB Lopes dos Santos, JPS Pires, GB Ventura, DJ Passos, B Amorim and D Parker for their helpful comments and suggestions regarding this paper.

Appendix

A.1. Sublattice displacement

The calculation of the photogalvanic effect for gapped graphene was very efficient due to the cancellation of the three-dimensional $\Gamma$ matrices. In this appendix, we provide an extra example, which does not benefit from that property. By changing the relative position of the two sublattices, we obtain non-zero values in all the $\Gamma$ matrices, which enables us to test the remainder of the formula. All the hopping parameters in this system are exactly the same as in regular gapped graphene; the only difference is in the distance between atoms, which changes the velocity operators while keeping the Hamiltonian intact (see figure A1).


Figure A1. Displaced honeycomb lattice and choice of primitive vectors.


The primitive vectors are identical, but the nearest-neighbor vectors are different, since one of the sublattices was translated in the y  direction by a/2. The second-order xxx conductivity remains zero, but the xxy photogalvanic effect is no longer zero, as seen in figure A2. The lattice size and number of polynomials were reduced to 1024 and 512, respectively, due to the greatly increased computational cost. At lower frequencies, the results start to diverge because there are not enough polynomials to resolve that region. The results are in great agreement with the ones obtained by $\boldsymbol{k}$ -space integration. The small oscillations in the imaginary part are expected to disappear as the number of polynomials is increased.
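The statement that a sublattice displacement changes the velocity operators while leaving the Hamiltonian intact can be checked on a two-site toy model, using $v_{x}=({\rm i}/\hbar)[H,X]$ with a diagonal position operator; the parameter values below are illustrative.

```python
import numpy as np

# Two-site illustration: v = (i/hbar)[H, X] depends on the atomic
# positions through X, while H itself does not change.
hbar = 1.0
H = np.array([[0.5, -2.33],
              [-2.33, -0.5]])       # toy on-site +/-Delta and hopping t

def velocity(x_positions):
    """v = (i/hbar)[H, X] for a diagonal position operator X."""
    X = np.diag(x_positions)
    return 1j * (H @ X - X @ H) / hbar

v0 = velocity([0.0, 1.0])    # original atomic positions
v1 = velocity([0.0, 1.5])    # one atom displaced: H unchanged, v rescaled
```

The off-diagonal elements of v are proportional to $H_{ij}(x_{j}-x_{i})$, so displacing one atom rescales the velocity operator even though every matrix element of H is untouched.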


Figure A2. Second-order xxy photogalvanic effect for displaced gapped graphene. Hopping parameter: $t=2.33~{\rm eV}$ , temperature: $T=0~{\rm K}$ , chemical potential: $\mu=0~{\rm eV}$ , gap: $\Delta=7.8~{\rm eV}$ , broadening parameter: $\lambda=39~{\rm meV}$ , number of Chebyshev moments used: M  =  512, lattice size: $L=1024\times1024$ .


Footnotes

  • Fourier convention: $f(t)=(2\pi)^{-1}\int {\rm d}\omega\, {\rm e}^{-{\rm i}\omega t}\tilde{f}\left(\omega\right)$ .

  • These matrices were explicitly calculated in the k basis and shown to be exactly zero.
